remove guard_size_oblivious from unbind. #148815
Conversation
[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148815
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (1 Unrelated Failure) As of commit 98737ac with merge base 2ee2317.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This was interesting. Why? I initially thought this was the best rewrite for this:

```
# Looking at this, what will happen because of guard_size_oblivious/guard_or_false?
# At least for normal tensors we would throw the following error at runtime when
# t.shape[dim] == 0:
#     RuntimeError: number of sections must be larger than 0, got 0
# because of calling torch.tensor_split(t, t.shape[dim], dim).
# Maybe a better error msg alternative here is to throw this in such a case:
#     RuntimeError: unbind unbacked semantics does not support 0 inputs during runtime.
def fall_back():
    # throws at runtime when t.shape[dim] == 0
    torch._check(
        t.shape[dim] != 0,
        lambda: "unbind unbacked semantics does not support 0 inputs during runtime.",
    )
    return False


if guard_or_else(t.shape[dim] == 0, fall_back):
    return ()
else:
    return tuple(
        torch.squeeze(s, dim) for s in torch.tensor_split(t, t.shape[dim], dim)
    )
```

but then I tried to add examples, e.g.:

```
import torch

@torch.compile(fullgraph=True)
def func(y):
    return y.unbind(dim=2)

z = torch.ones(1, 1, 1, 1)
# mark dim 2 unbacked before calling, so unbind sees an unbacked SymInt
torch._dynamo.decorators.mark_unbacked(z, 2)
func(z)
```

and we actually do hit another guard, specifically the guard_int() below:

```
std::vector<Tensor> tensor_split_sections_symint(
    const Tensor& self,
    c10::SymInt sym_sections,
    int64_t dim) {
  TORCH_CHECK(
      self.dim() > 0,
      "tensor_split expected at least a 1-dimensional tensor, but got a tensor with ",
      self.dim(),
      " dims");
  int64_t dim_ = maybe_wrap_dim(dim, self.dim());
  // NB: intentional, sections specifies number of output tensors, which
  // cannot be polymorphic
  int64_t sections = sym_sections.guard_int(__FILE__, __LINE__);
```

So my thoughts: I do not know what is being avoided by adding this; it was added in its own PR, so I assume for a reason? #124959

I will keep the current semantics of guard_size_oblivious and just use guard_or_false. Maybe for other types of tensors tensor_split_sections_symint does not throw a runtime error or re-guard? I don't know.

[ghstack-poisoned]
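For readers following this thread, here is a minimal sketch of what "keep the current semantics and just use guard_or_false" could look like in the unbind decomposition. It assumes guard_or_false is importable from torch.fx.experimental.symbolic_shapes, and it is an illustration, not necessarily the exact diff landed by this PR:

```
import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false


def unbind_sketch(t: torch.Tensor, dim: int = 0):
    # If t.shape[dim] == 0 cannot be decided symbolically (e.g. an unbacked
    # SymInt), guard_or_false answers False and we fall through to
    # tensor_split, which specializes the section count via guard_int anyway.
    if guard_or_false(t.shape[dim] == 0):
        return ()
    return tuple(
        torch.squeeze(s, dim) for s in torch.tensor_split(t, t.shape[dim], dim)
    )
```

Since tensor_split_sections_symint guard_int()s the section count regardless, answering False for an undecidable t.shape[dim] == 0 just defers the specialization to that call instead of guarding here.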
ghstack-source-id: 3dd8fef
Pull Request resolved: pytorch/pytorch#148815
unbind will always specialize on dim, because it determines the number of output tensors. guard_size_oblivious is not useful there and is probably more confusing for code readers. Added a comment and a test that verifies the specialization. [ghstack-poisoned]
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
unbind will always specialize on dim, because it determines the number of output tensors.
guard_size_oblivious is not useful there and is probably more confusing for code readers.
Added a comment and a test that verifies the specialization.
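As a rough sketch of the kind of test that could verify this specialization (the counter-based check below is illustrative, not necessarily the test added in this PR): under dynamic shapes, changing only the size of the unbound dim should force a recompile, because unbind fixes the number of outputs.

```
import torch
from torch._dynamo.testing import CompileCounter

cnt = CompileCounter()

@torch.compile(backend=cnt, dynamic=True)
def f(x):
    return x.unbind(dim=1)

f(torch.ones(3, 4, 5))
# Changing only the size of dim 1 triggers a recompile: unbind specialized
# on that size because it determines how many tensors are returned.
f(torch.ones(3, 6, 5))
assert cnt.frame_count == 2
```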