8000 remove guard_size_oblivious from unbind. by laithsakka · Pull Request #148815 · pytorch/pytorch · GitHub

remove guard_size_oblivious from unbind. #148815


Closed
wants to merge 5 commits

Conversation

laithsakka
Contributor
@laithsakka laithsakka commented Mar 8, 2025

Stack from ghstack (oldest at bottom):

unbind will always specialize on dim, because it determines the number of output tensors.
guard_size_oblivious is not useful there and is probably more confusing for readers of the code.
Added a comment and a test that verifies the specialization.
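
For intuition, a minimal eager-mode illustration (not part of the PR) of why the output count ties unbind to the concrete size of dim:

```
import torch

t = torch.randn(3, 4)
outs = t.unbind(dim=0)          # one tensor per index along dim 0
assert len(outs) == t.shape[0]  # the number of outputs equals the dim size,
                                # so compilation must specialize on it
```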

pytorch-bot bot commented Mar 8, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148815

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 98737ac with merge base 2ee2317:

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

laithsakka added a commit that referenced this pull request Mar 8, 2025
ghstack-source-id: d88a7cb
Pull Request resolved: #148815
@laithsakka laithsakka changed the title remove guard_size_oblivious from unbind [DRAFT] remove guard_size_oblivious from unbind, but anyway seems like its useless there? Mar 8, 2025
@laithsakka laithsakka mentioned this pull request Mar 8, 2025
@laithsakka laithsakka changed the title [DRAFT] remove guard_size_oblivious from unbind, but anyway seems like its useless there? [DRAFT] remove guard_size_oblivious from unbind. Mar 8, 2025
This was interesting; here is why.
I initially thought the best rewrite for this was:
```
# What happens because of guard_size_oblivious/guard_or_false:
# at least for normal tensors, when t.shape[dim] == 0 we would throw the
# following error at runtime, because of the call
# torch.tensor_split(t, t.shape[dim], dim):
#   RuntimeError: number of sections must be larger than 0, got 0
#
# Maybe a better alternative here is to throw, in that case, the error:
#   RuntimeError: unbind unbacked semantics does not support 0 inputs during runtime.
def fall_back():
    # Assert at runtime that the size is not 0, then take the general branch.
    torch._check(
        t.shape[dim] != 0,
        lambda: "unbind unbacked semantics does not support 0 inputs during runtime.",
    )
    return False

if guard_or_else(t.shape[dim] == 0, fall_back):
    return ()
else:
    return tuple(
        torch.squeeze(s, dim) for s in torch.tensor_split(t, t.shape[dim], dim)
    )
```
But then I tried to add examples, e.g.:
```
import torch

@torch.compile(fullgraph=True)
def func(y):
    return y.unbind(dim=2)

z = torch.ones(1, 1, 1, 1)
torch._dynamo.decorators.mark_unbacked(z, 2)  # make z.shape[2] unbacked
func(z)
```
But we actually do hit another guard, specifically the guard_int() below:

```
std::vector<Tensor> tensor_split_sections_symint(
    const Tensor& self,
    c10::SymInt sym_sections,
    int64_t dim) {
  TORCH_CHECK(
      self.dim() > 0,
      "tensor_split expected at least a 1-dimensional tensor, but got a tensor with ",
      self.dim(),
      " dims");
  int64_t dim_ = maybe_wrap_dim(dim, self.dim());
  // NB: intentional, sections specifies number of output tensors, which
  // cannot be polymorphic
  int64_t sections = sym_sections.guard_int(__FILE__, __LINE__);
```
So my thoughts: I do not know what adding this avoids; it was added in its own PR, so I assume there was a reason?
#124959
I will keep the current semantics of guard_size_oblivious and just use guard_or_false.

Maybe for other types of tensors tensor_split_sections_symint does not throw a runtime error or re-guard? I don't know.
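
For reference, here is a minimal sketch (my own reconstruction, not the actual PR diff) of what a specializing unbind decomposition looks like once the size is simply read from t.shape[dim]; the output count has to be concrete either way:

```
import torch

# Hypothetical helper, not PyTorch's decomposition: reading t.shape[dim]
# directly means torch.compile specializes (guards) on that size, because
# the number of returned tensors cannot be symbolic.
def unbind_sketch(t, dim=0):
    dim = dim % t.dim()
    n = t.shape[dim]
    if n == 0:
        return ()
    return tuple(torch.squeeze(s, dim) for s in torch.tensor_split(t, n, dim))
```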

[ghstack-poisoned]
Divigroup-RAP pushed a commit to Divigroup-RAP/PYTORCH that referenced this pull request Apr 22, 2025
laithsakka added a commit that referenced this pull request May 2, 2025
ghstack-source-id: 41f4725
Pull Request resolved: #148815
@laithsakka laithsakka requested a review from aorenste May 2, 2025 17:02
@laithsakka laithsakka changed the title [DRAFT] remove guard_size_oblivious from unbind. remove guard_size_oblivious from unbind. May 2, 2025
@laithsakka laithsakka requested review from bobrenjc93 and pianpwk May 2, 2025 17:04

laithsakka added a commit that referenced this pull request Jun 12, 2025
ghstack-source-id: 4ae951c
Pull Request resolved: #148815

laithsakka added a commit that referenced this pull request Jun 21, 2025
ghstack-source-id: b092b86
Pull Request resolved: #148815
@laithsakka laithsakka added the topic: not user facing topic category label Jun 25, 2025
@laithsakka
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Jun 25, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job was waiting for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@laithsakka
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

Labels
ciflow/trunk · Merged · topic: not user facing

3 participants