Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d by xwu-498 · Pull Request #152094 · pytorch/pytorch · GitHub

Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d #152094

Open · xwu-498 wants to merge 1 commit into main from fix-pad-grad

Conversation

@xwu-498 xwu-498 commented Apr 24, 2025

Fixes #135447.

When the third-from-last dimension is 2^16 or greater, MPSGraph returns 0 for the pad gradient.
To work around this, we split the problematic dimension into chunks, each no greater than
2^16 - 1 elements.
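
For illustration, the same chunking idea can be sketched at the Python level. This is a sketch only: the actual fix lives in the MPS backend, and `replication_pad_chunked` is a hypothetical helper, not part of this PR. Splitting is safe because ReplicationPad1d/2d pads only the trailing one or two dimensions, so padding each chunk of the third-from-last dimension and concatenating the results is equivalent to padding the whole tensor.

```
import torch

def replication_pad_chunked(x, pad_module, max_chunk=2**16 - 1):
    # Split the problematic third-from-last dimension into chunks of at
    # most 65535 (mirroring the PR's max_sub_batch_size constant), pad
    # each chunk, and stitch the results back together. Autograd flows
    # through split/cat, so gradients are computed per chunk as well.
    chunks = torch.split(x, max_chunk, dim=-3)
    return torch.cat([pad_module(c) for c in chunks], dim=-3)

# Usage with the 1d test shape below (65739 > 2**16, so two chunks):
# x = torch.randn(65739, 2, 4, device='mps', requires_grad=True)
# out = replication_pad_chunked(x, torch.nn.ReplicationPad1d((1, 1)))
```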

Test case for nn.ReplicationPad1d:

    shape = [65739, 2, 4]
    x_cpu = torch.randn(shape, device='cpu', requires_grad=True)
    x_mps = x_cpu.clone().detach().to('mps').requires_grad_(True)
    model = torch.nn.ReplicationPad1d((1, 1))

    out_cpu = model(x_cpu)
    out_mps = model(x_mps)

    # backward
    g_cpu = torch.randn_like(out_cpu)
    g_mps = g_cpu.clone().detach().to('mps').requires_grad_(False)
    out_cpu.backward(g_cpu)
    out_mps.backward(g_mps)

    print(f"{((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = }")

    # Expected Output:
    # ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(0)

Test case for nn.ReplicationPad2d:

    shape = [2, 65739, 2, 4]
    x_cpu = torch.randn(shape, device='cpu', requires_grad=True)
    x_mps = x_cpu.clone().detach().to('mps').requires_grad_(True)
    model = torch.nn.ReplicationPad2d((1, 1, 1, 1))

    out_cpu = model(x_cpu)
    out_mps = model(x_mps)

    # backward
    g_cpu = torch.randn_like(out_cpu)
    g_mps = g_cpu.clone().detach().to('mps').requires_grad_(False)
    out_cpu.backward(g_cpu)
    out_mps.backward(g_mps)

    print(f"{((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = }")

    # Expected Output:
    # ((x_cpu.grad - x_mps.grad.cpu()).abs() > 1e-5).sum() = tensor(0)

Both tests produce the expected output with this workaround.

pytorch-bot bot commented Apr 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152094

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented Apr 24, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@pytorch-bot pytorch-bot bot added the release notes: mps Release notes category label Apr 24, 2025
@xwu-498 xwu-498 force-pushed the fix-pad-grad branch 2 times, most recently from 97ba670 to 9256e4e on April 24, 2025 23:34
@xwu-498 xwu-498 changed the title Work around MPSGraph issue in handling backward pass of nn.Replicatio… Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d. Apr 24, 2025
@xwu-498 xwu-498 changed the title Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d. Work around MPSGraph issue in backward pass of nn.ReplicationPad1d/2d Apr 24, 2025
@soulitzer soulitzer added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Apr 28, 2025
@skotapati skotapati added the ciflow/mps Run MPS tests (subset of trunk) label May 14, 2025
Contributor
@malfet malfet left a comment

Please fix lint; also, it looks like it fails on exactly the test you are trying to add.

// we break the tensor into chunks where the problematic dimension is no greater than 2**16 - 1.
// This is reported in https://github.com/pytorch/pytorch/issues/135447.
// Internal radar for MPSGraph: rdar://149853787.
const int64_t max_sub_batch_size = 65535;
Suggested change
const int64_t max_sub_batch_size = 65535;
constexpr auto max_sub_batch_size = 65535;
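
The suggested `constexpr` makes the limit a compile-time constant rather than a runtime `const` variable, the idiomatic C++ choice for a fixed numeric threshold like this one.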

Author

Thanks @malfet for the comments. I will follow up on these issues.

Labels
ciflow/mps Run MPS tests (subset of trunk) open source release notes: mps Release notes category triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[MPS] Correctness issue in backward pass of nn.ReplicationPad1d and nn.ReplicationPad2d
5 participants