`torchtyping` annotations make saving to Torchscript fail · Issue #87781 · pytorch/pytorch · GitHub
torchtyping annotations make saving to Torchscript fail #87781

Open
augustebaum opened this issue Oct 26, 2022 · 0 comments
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue

Comments

augustebaum commented Oct 26, 2022

🐛 Describe the bug

It seems that there is already some interest in offering a type-annotation system for PyTorch.
This issue mentions torchtyping. However, when I try to save an nn.Module that includes such annotations to TorchScript, I get an error that looks like this:

RuntimeError:
       Unknown type constructor TensorType:
         File "my/code/vae.py", line 34
           def forward(
               self, x: TensorType["batch_size", "input_dim"]
                        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
           ) -> TensorType["batch_size", "input_dim"]:

It's very unclear to me why a type annotation would impact the TorchScript saving process, but I don't know anything about TorchScript.

However, the torchtyping maintainer seems to have looked into it, and a year ago they noted that the issue lies with the TorchScript implementation; see the full discussion in patrick-kidger/torchtyping#13. Since they recommended contacting the TorchScript maintainers, and I could not find an existing issue about this in this project, I'd like to bring it to the attention of whoever it may concern, if they're not already aware.

Some kind people have shared workarounds in the comments of this issue, and they seem to work.

Thanks for your attention!

Versions

PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 12.5.1 (x86_64)
GCC version: Could not collect
Clang version: 14.0.0 (clang-1400.0.29.102)
CMake version: version 3.24.2
Libc version: N/A

Python version: 3.10.7 (main, Sep 15 2022, 01:51:29) [Clang 14.0.0 (clang-1400.0.29.102)] (64-bit runtime)
Python platform: macOS-12.5.1-x86_64-i386-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] pytorch-lightning==1.7.7
[pip3] torch==1.12.1
[pip3] torchmetrics==0.10.0
[pip3] torchtyping==0.1.4
[conda] Could not collect

cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel

@bdhirsh bdhirsh added the oncall: jit Add this issue/PR to JIT oncall triage queue label Oct 27, 2022