Upgrade to DLPack 1.0. by ysiraichi · Pull Request #145000 · pytorch/pytorch · GitHub
Upgrade to DLPack 1.0. #145000


Open · wants to merge 13 commits into base: gh/ysiraichi/80/base

Conversation

@ysiraichi (Collaborator) commented Jan 16, 2025

Stack from ghstack (oldest at bottom):

This PR makes the necessary changes in order to upgrade PyTorch DLPack
support to version 1.0. In summary, we add support for the following:

  • Support both DLManagedTensor and DLManagedTensorVersioned when
    producing and consuming DLPack capsules
  • New parameter for __dlpack__ method: max_version
  • Version checks:
    • Fallback to old implementation if no max_version or if version
      lower than 1.0
    • Check that the to-be-consumed capsule is of version up to 1.X

In order to accommodate these new specifications, this PR adds the
following main changes:

  • torch._C._to_dlpack_versioned Python API (Module.cpp): new Python
    API for creating a versioned DLPack capsule (called by __dlpack__
    method)
  • DLPackTraits<T> class (DLConvertor.h): select the correct
    traits (e.g. capsule name, conversion functions) depending on which
    DLPack tensor class is being used
  • toDLPackImpl<T> function (DLConvertor.cpp): populates the
    common fields of both classes
  • fromDLPackImpl<T> function (DLConvertor.cpp): constructs a tensor
    from a DLPack capsule
  • fillVersion<T> function (DLConvertor.cpp): populates the version
    field for DLManagedTensorVersioned (no-op for DLManagedTensor)
  • tensor_fromDLPackImpl<T> function (tensor_new.cpp): outer function
    for constructing a tensor out of a DLPack capsule that also marks the
    capsule as used
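The version-check fallback described above can be sketched in plain Python. The helper names below are hypothetical stand-ins for the private `torch._C._to_dlpack` / `torch._C._to_dlpack_versioned` entry points; the capsule names `"dltensor"` and `"dltensor_versioned"` come from the DLPack spec:

```python
# Hypothetical sketch of the producer-side negotiation in __dlpack__.
# make_legacy_capsule / make_versioned_capsule stand in for the real
# torch._C._to_dlpack / torch._C._to_dlpack_versioned calls.
def make_legacy_capsule(tensor):
    return ("dltensor", tensor)            # DLPack 0.8: DLManagedTensor

def make_versioned_capsule(tensor):
    return ("dltensor_versioned", tensor)  # DLPack 1.x: DLManagedTensorVersioned

def dlpack_export(tensor, *, max_version=None):
    # No max_version, or a requested major version below 1:
    # fall back to the old (unversioned) implementation.
    if max_version is None or max_version[0] < 1:
        return make_legacy_capsule(tensor)
    return make_versioned_capsule(tensor)
```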

pytorch-bot bot commented Jan 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/145000

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 2f1782e with merge base 85bfaf8:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

ysiraichi added a commit that referenced this pull request Jan 16, 2025
ysiraichi added a commit that referenced this pull request Feb 1, 2025
@ysiraichi added the `module: dlpack` and `release notes: python_frontend` labels on Feb 1, 2025
ysiraichi added a commit that referenced this pull request Feb 1, 2025
ysiraichi added a commit that referenced this pull request Feb 2, 2025
@ysiraichi requested review from rgommers and albanD on February 2, 2025
ysiraichi added a commit that referenced this pull request Feb 2, 2025
@albanD (Collaborator) left a comment

Change sounds ok, but what are the expectations in terms of BC when interacting with libraries that haven't upgraded to the latest DLPack yet? Is it ok for all Tensors to be of the new version?

TORCH_API Tensor fromDLPack(DLManagedTensor* src);
TORCH_API Tensor
fromDLPack(DLManagedTensor* src, std::function<void(void*)> deleter);
TORCH_API DLManagedTensorVersioned* toDLPack(const Tensor& src);
Collaborator:

Is this API used by C++ libraries? This would be a BC-breaking change for these users right?

Collaborator (Author):

I think it's being used by PyTorch/XLA. Yes, it's definitely BC-breaking. I was thinking that, since DLPack 1.0 should be the new default, the old version should have the name with a suffix. However, now that you brought this up, not being BC-breaking sounds more important.

In summary, I will change the names so that we are not BC-breaking.

TORCH_API DLManagedTensor* toDLPackUnversioned(const Tensor& src);
TORCH_API Tensor fromDLPack(
DLManagedTensorVersioned* src,
std::optional<std::function<void(void*)>> deleter = std::nullopt);
Collaborator:
std::function is optional by default and doesn't require std::optional right?

@@ -2270,7 +2270,7 @@ def compiled_with_cxx11_abi() -> builtins.bool:
matrix_rank,
solve,
)
from torch.utils.dlpack import from_dlpack, to_dlpack
from torch.utils.dlpack import from_dlpack, to_dlpack, to_dlpack_unversioned
Collaborator:

While I understand this matches current code, I don't think we want to have this as a new top level API? Being part of torch.utils.dlpack is enough?

Collaborator:

That comment sounds right - I don't think it should be needed. For the introduction strategy to a versioned protocol, see the explanation and prototype code (if max_version is None: ....) at https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__dlpack__.html

Collaborator:

If it's useful/needed to have a Python function to use here so that can be called from within Tensor.__dlpack__, then making it a private function by prepending an underscore should be fine I think.

@@ -1599,6 +1616,7 @@ static std::initializer_list<PyMethodDef> TorchMethods = {
METH_NOARGS,
nullptr},
{"_to_dlpack", THPModule_toDLPack, METH_O, nullptr},
{"_to_dlpack_unversioned", THPModule_toDLPackUnversioned, METH_O, nullptr},
Collaborator:

Why do we still need this one?

Collaborator:

This does look necessary to me. For DLPack to change the ABI once, there's a dance that needs doing to be not-super-disruptive: continue returning the old (0.8) version, unless the consumer indicates it can handle the new (1.X) version by passing in max_version=(1, 0) (or (1, x) in the future).

try:
# Try running __dlpack__ while specifying `max_version` argument.
dlpack = ext_tensor.__dlpack__(**kwargs)
except TypeError:
Collaborator:

Is it guaranteed that they will fail with TypeError?

Collaborator:

That should be the case - it's the standard error in Python for using a non-existing keyword.
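This is easy to confirm with a toy exporter that predates the keyword (an illustrative sketch, not torch code):

```python
def legacy_dlpack(*, stream=None):
    # An exporter written before max_version existed.
    return "legacy-capsule"

# Passing an unknown keyword argument raises TypeError, which is
# exactly what the consumer-side fallback relies on.
try:
    legacy_dlpack(max_version=(1, 0))
    raised = False
except TypeError:
    raised = True
```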

@rgommers (Collaborator) left a comment

Thanks for tackling this @ysiraichi! This looks like a good start. A few high-level comments:

  • As noted inline, __dlpack__ gained 3 new keywords, so far this adds only max_version. I suspect it's safer to add all at once, because other libraries are probably going to assume that if max_version is present, the 1.0 support is complete.
  • It would be good to have new Python-level tests in test/test_dlpack.py. That will also make the changes in logic easier to review.
    • There's one testing TODO for a very old numpy version there that may be nice to take along:
      # TODO: add interchange tests once NumPy 1.22 (dlpack support) is required
  • I think this is not actionable right now, but adding for completeness: DLPack gained a new DLPACK_FLAG_BITMASK_READ_ONLY flag, which in the future can feed into PyTorch's copy-on-write (COW) feature.

try:
# Try running __dlpack__ while specifying `max_version` argument.
dlpack = ext_tensor.__dlpack__(**kwargs)
except TypeError:
Collaborator:

That should be the case - it's the standard error in Python for using a non-existing keyword.

@@ -2270,7 +2270,7 @@ def compiled_with_cxx11_abi() -> builtins.bool:
matrix_rank,
solve,
)
from torch.utils.dlpack import from_dlpack, to_dlpack
from torch.utils.dlpack import from_dlpack, to_dlpack, to_dlpack_unversioned
Collaborator:

That comment sounds right - I don't think it should be needed. For the introduction strategy to a versioned protocol, see the explanation and prototype code (if max_version is None: ....) at https://data-apis.org/array-api/latest/API_specification/generated/array_api.array.__dlpack__.html

torch/_tensor.py Outdated

max_version (tuple[int, int] or None): An optional Python tuple with
2 integers, representing the maximum version the caller supports. If
None is passed, then PyTorch will fallback to DLPack 0.X, where versions
Collaborator:

I'd rephrase as "If None (default), PyTorch will use DLPack 0.8".

@@ -2270,7 +2270,7 @@ def compiled_with_cxx11_abi() -> builtins.bool:
matrix_rank,
solve,
)
from torch.utils.dlpack import from_dlpack, to_dlpack
from torch.utils.dlpack import from_dlpack, to_dlpack, to_dlpack_unversioned
Collaborator:

If it's useful/needed to have a Python function to use here so that can be called from within Tensor.__dlpack__, then making it a private function by prepending an underscore should be fine I think.

@@ -1661,7 +1661,7 @@ def __torch_function__(cls, func, types, args=(), kwargs=None):

__torch_dispatch__ = _C._disabled_torch_dispatch_impl

def __dlpack__(self, stream=None):
def __dlpack__(self, stream=None, max_version=None):
Collaborator:

Note that there are also new dl_device and copy keywords (API docs).

@@ -1599,6 +1616,7 @@ static std::initializer_list<PyMethodDef> TorchMethods = {
METH_NOARGS,
nullptr},
{"_to_dlpack", THPModule_toDLPack, METH_O, nullptr},
{"_to_dlpack_unversioned", THPModule_toDLPackUnversioned, METH_O, nullptr},
Collaborator:

This does look necessary to me. For DLPack to change the ABI once, there's a dance that needs doing to be not-super-disruptive: continue returning the old (0.8) version, unless the consumer indicates it can handle the new (1.X) version by passing in max_version=(1, 0) (or (1, x) in the future).

@ysiraichi (Collaborator, Author):

@albanD @rgommers Since you have already reviewed this PR, I was thinking about adding the new __dlpack__ keywords + tests in a new PR stacked on top of this one. There are 2 reasons for that:

  • It would be easier to review the new changes
  • We could merge the whole stack at once when that PR is ready

Let me know if that works for you.

@rgommers (Collaborator) commented Feb 7, 2025

@ysiraichi a stack with two separate PRs, with the dl_device and copy keywords in the second PR, sounds perfectly fine to me. Merging the changes in one go will be nice, but functionally those two keywords don't interact with max_version, so it'll be a clean split.

@albanD (Collaborator) commented Feb 7, 2025

Separate PR on the stack sounds ok.
Note that you won't be able to land both together if the bottom PR's CI is red; they must all pass CI.

ysiraichi added a commit that referenced this pull request Feb 7, 2025
@leofang (Contributor) commented Mar 11, 2025

Gentle ping @ysiraichi, any chance we can get this work wrapped up in the near future? Note that we just tagged DLPack v1.1: https://github.com/dmlc/dlpack/releases/tag/v1.1, which added a few more dtype enums.

@ysiraichi (Collaborator, Author):

Yeah. I'm still working on this. Have 2 more PRs to be opened.
Haven't had the time to get back to them, though. Will do so soon.

@ysiraichi (Collaborator, Author):

Here is a summary of the latest updates after the last reviews:

  • Renamed the DLPack API, removing the Unversioned suffix
  • Renamed the DLPack API, adding Versioned to the newly added functions
  • Removed the import of to_dlpack_unversioned
    • Using torch._C._to_dlpack() and torch._C._to_dlpack_versioned() directly
  • Added test_max_version.py, which tests both the versioned and unversioned execution paths

@albanD @rgommers
Sorry for the delay on this PR.
Let me know what you think of this new set of changes.

@ysiraichi (Collaborator, Author):

Hi, @albanD @rgommers. Sorry for taking so long on this. I believe this stack should be good for another round of reviews. Let me know what you think.


template <class T>
at::Tensor tensor_fromDLPackImpl(PyObject* data, T* tensor) {
// HACK: Ensure that we hold the GIL here just in case the
Member:

n00b q: what's the context here?

@@ -1690,9 +1690,14 @@ def __dlpack__(self, stream=None):
both streams. If None or -1 is passed then no synchronization is performed.
If 1 (on CUDA) or 0 (on ROCM) then the default stream is used for
synchronization.

max_version (tuple[int, int] or None): An optional Python tuple with
Member:

what's the tradeoff with being conservative on the default value?

Collaborator:

class DLDeviceType(enum.IntEnum):
# Enums as in DLPack specification (aten/src/ATen/dlpack.h)
kDLCPU = 1,
kDLGPU = 2,
kDLCPUPinned = 3,
Member:

why change the name?

@msaroufim (Member):

Will defer to @albanD for merge, but this seems like a fairly safe PR to land: the BC guarantees are there, and despite the upgrade it doesn't change the way DLPack works in PyTorch all that much; if anything, it adds more layers of tests in every new PR.

@albanD (Collaborator) left a comment

Sounds good!
Mostly small comments.
The main thing would be to add a BC-breaking note on exactly which user facing APIs are changed.

*
* \note This is the current standard DLPack exchange data structure.
*/
struct DLManagedTensorVersioned {
Collaborator:

I assume this struct is defined by the dlpack standard?
I'm curious why we define all of these by hand vs using a standard include from dlpack? I guess it doesn't exist?

def test(device, **kwargs):
inp = make_tensor((5,), dtype=torch.float32, device=device)
out = torch.from_dlpack(inp.__dlpack__(**kwargs))
self.assertEqual(inp, out)
Collaborator:

Can we query the version field on the dlpack Tensor to ensure we get the object we want?
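Short of dereferencing the struct, the capsule name already distinguishes the two paths (`"dltensor"` vs `"dltensor_versioned"` per the DLPack spec), and it can be queried from Python through the C API. A sketch using hand-made stand-in capsules; a real test would pass `inp.__dlpack__(max_version=(1, 0))` instead:

```python
import ctypes

pyapi = ctypes.pythonapi
pyapi.PyCapsule_New.restype = ctypes.py_object
pyapi.PyCapsule_New.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p]
pyapi.PyCapsule_GetName.restype = ctypes.c_char_p
pyapi.PyCapsule_GetName.argtypes = [ctypes.py_object]

def capsule_is_versioned(capsule):
    # DLPack names legacy capsules "dltensor" and 1.x capsules
    # "dltensor_versioned"; reading the name does not consume the capsule.
    return pyapi.PyCapsule_GetName(capsule) == b"dltensor_versioned"

# Stand-in capsules (the payload pointer is a dummy non-NULL value):
versioned = pyapi.PyCapsule_New(ctypes.c_void_p(1), b"dltensor_versioned", None)
legacy = pyapi.PyCapsule_New(ctypes.c_void_p(1), b"dltensor", None)
```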

@@ -1673,7 +1673,7 @@ def __torch_function__(cls, func, types, args=(), kwargs=None):

__torch_dispatch__ = _C._disabled_torch_dispatch_impl

def __dlpack__(self, stream=None):
def __dlpack__(self, stream=None, max_version=None):
Collaborator:

These should be keyword-only as well.
That will help with adding more kwargs later without any issue.
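What keyword-only parameters look like here, on a toy class rather than the real Tensor (the array API standard also defines `dl_device` and `copy` keywords that would be appended the same way):

```python
class ToyTensor:
    def __dlpack__(self, *, stream=None, max_version=None):
        # The bare `*` makes every parameter keyword-only, so new
        # keywords (dl_device, copy, ...) can be added later without
        # breaking any positional caller -- there are none.
        return (stream, max_version)

t = ToyTensor()
t.__dlpack__(max_version=(1, 0))   # fine
# t.__dlpack__(None)               # would raise TypeError
```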

@@ -1690,9 +1690,14 @@ def __dlpack__(self, stream=None):
both streams. If None or -1 is passed then no synchronization is performed.
If 1 (on CUDA) or 0 (on ROCM) then the default stream is used for
synchronization.

max_version (tuple[int, int] or None): An optional Python tuple with
Collaborator:

"""
if has_torch_function_unary(self):
return handle_torch_function(Tensor.__dlpack__, (self,), self, stream)
args = (self, stream, max_version)
Collaborator:

Move to kwargs

"""
if has_torch_function_unary(self):
return handle_torch_function(Tensor.__dlpack__, (self,), self, stream)
args = (self, stream, max_version)
return handle_torch_function(Tensor.__dlpack__, (self,), *args)
Collaborator:

bug below not from this PR: we should check both is_conj() and is_neg() as they will fail the same way!

class DLDeviceType(enum.IntEnum):
# Enums as in DLPack specification (aten/src/ATen/dlpack.h)
kDLCPU = 1,
kDLGPU = 2,
kDLCPUPinned = 3,
kDLCUDA = 2,
Collaborator:

These are BC-breaking, right?
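One way to rename without breaking BC is to keep the old spellings as aliases; `enum.IntEnum` treats a later member with a duplicate value as an alias of the first. A sketch with a subset of the device types (the canonical names and values follow the DLPack spec):

```python
import enum

class DLDeviceType(enum.IntEnum):
    kDLCPU = 1
    kDLCUDA = 2        # DLPack 1.0 spelling
    kDLGPU = 2         # pre-1.0 spelling, kept as an alias for BC
    kDLCUDAHost = 3
    kDLCPUPinned = 3   # pre-1.0 spelling, kept as an alias for BC
```

Old code referring to `DLDeviceType.kDLGPU` keeps working and compares equal to the new name.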

try:
# Try running __dlpack__ while specifying `max_version` argument.
dlpack = ext_tensor.__dlpack__(**kwargs)
except TypeError:
Collaborator:

According to the standard, BufferError should be raised if you don't get a suitable max_version. Should we handle that here?
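One possible reading of the standard: TypeError means "old exporter, retry the pre-1.0 protocol", while BufferError means the exporter understood max_version but cannot satisfy it, so it should propagate. A toy sketch of that distinction (not the actual torch.utils.dlpack code):

```python
def from_dlpack_sketch(export_fn, *, max_version=(1, 0)):
    try:
        return export_fn(max_version=max_version)
    except TypeError:
        # Pre-1.0 exporter: the keyword is unknown, retry without it.
        return export_fn()
    # BufferError is intentionally NOT caught: it signals that the
    # exporter cannot produce a capsule for the requested version, so
    # retrying the unversioned path would not help.

def old_exporter(*, stream=None):
    return "legacy-capsule"

def new_exporter(*, stream=None, max_version=None):
    return "versioned-capsule" if max_version else "legacy-capsule"
```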

6 participants