use identity op for alpha=inf in torch.celu and quantized_celu by redwrasse · Pull Request #148066 · pytorch/pytorch · GitHub

use identity op for alpha=inf in torch.celu and quantized_celu #148066


Open · wants to merge 2 commits into base: main

Conversation

redwrasse (Contributor) commented Feb 27, 2025

Fixes #148065

This PR short-circuits the celu and quantized_celu ops to return the input unchanged when alpha=inf, so that celu(x, inf) is defined for all x. Since CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1)) and alpha * (exp(x / alpha) - 1) tends to x as alpha tends to infinity, the limiting function is the identity, which is exactly what the short-circuit returns.

import torch

# (same for torch.ao.nn.quantized.functional.celu)


# Before
# -----------
x = torch.tensor(2.)
print(torch.celu(x, torch.inf))
# tensor(2.)
print(torch.celu(-x, torch.inf))
# tensor(nan)

x = torch.tensor(0.)
print(torch.celu(x, torch.inf))
# tensor(nan)



# After
# --------
x = torch.tensor(2.)
print(torch.celu(x, torch.inf))
# tensor(2.)
print(torch.celu(-x, torch.inf))
# tensor(-2.)

x = torch.tensor(0.)
print(torch.celu(x, torch.inf))
# tensor(0.)
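
For intuition, here is a minimal Python sketch of the identity short-circuit against the CELU definition. This is illustrative only, not the actual PR diff (the real change lives in the native celu/quantized_celu kernels), and celu_reference is a hypothetical name:

import torch

def celu_reference(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))
    if alpha == float("inf"):
        # alpha * (exp(x / alpha) - 1) -> x as alpha -> inf, so CELU
        # degenerates to the identity; short-circuit to avoid NaNs.
        return x
    return torch.clamp(x, min=0) + torch.clamp(alpha * (torch.exp(x / alpha) - 1), max=0)

Without the short-circuit, exp(x / alpha) - 1 evaluates to 0 at alpha = inf and the product inf * 0 yields NaN, which is the source of the tensor(nan) results shown above.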

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168

@pytorch-bot pytorch-bot bot added the module: cpu and release notes: quantization labels Feb 27, 2025
pytorch-bot bot commented Feb 27, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148066

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@zou3519 zou3519 added the triaged label Feb 27, 2025

Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark it as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label, please contact a maintainer to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Apr 29, 2025
Labels
module: cpu (CPU specific problem, e.g. perf, algorithm)
open source
release notes: quantization (release notes category)
Stale
triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

Support alpha=inf consistently for torch.celu
3 participants