Integrated AMD AWS runners into Pytorch CI by charan-ponnada · Pull Request #153704 · pytorch/pytorch · GitHub
Integrated AMD AWS runners into Pytorch CI #153704


Status: Open. Wants to merge 1 commit into base branch main.
Conversation

charan-ponnada
@charan-ponnada charan-ponnada commented May 16, 2025

Integrated AMD AWS runners into PyTorch CI, including the linux.24xl.amd for performance tests, the linux.8xl.amd with AVX512 support for unit and periodic tests, and the linux.12xl.amd with AVX2 support for unit and periodic tests.

Fixes #ISSUE_NUMBER

@charan-ponnada charan-ponnada requested a review from a team as a code owner May 16, 2025 11:56
pytorch-bot bot commented May 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153704

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 4ad430a with merge base 3443627:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented May 16, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

  • ✅ login: charan-ponnada (4ad430a)

@pytorch-bot pytorch-bot bot added the topic: not user facing (topic category) label May 16, 2025
@charan-ponnada charan-ponnada force-pushed the chponnad_fork branch 2 times, most recently from 12d181b to fd3fe40 Compare May 16, 2025 12:29
@amukho amukho left a comment


@charan-ponnada The linux.12xlarge.amd runner has AVX2 support and the linux.8xlarge.amd runner has AVX512 support. Can you please update the commit message in the PR?

{ config: "dynamic_cpu_inductor_timm", shard: 2, num_shards: 2, runner: "linux.8xlarge.amd" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "linux.8xlarge.amd" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "linux.8xlarge.amd" },
{ config: "inductor_torchbench_cpu_smoketest_perf", shard: 1, num_shards: 1, runner: "linux.24xl.amd" },

Please update runner name to linux.24xlarge.amd

Contributor
@malfet malfet left a comment


Can you please elaborate on how those results are going to be reported? And who will be looking into those results?

IMO a more logical first step would be to run just one dashboard with a few benchmarks that are important to the team and that you then plan to make improvements on.

@jcaip jcaip added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label May 16, 2025
Integrated AMD AWS runners into PyTorch CI, including the linux.24xl.amd for performance tests,
the linux.8xl.amd with AVX512 support for unit and periodic tests, and the linux.12xl.amd
with AVX2 support for unit and periodic tests.
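The commit above splits jobs by CPU feature level (AVX2 vs AVX512). As a quick sanity check, hypothetical and not part of this PR, the vector-extension flags a given Linux runner actually advertises can be read from /proc/cpuinfo:

```shell
#!/bin/sh
# Report whether this host's CPU advertises the vector-extension flags
# the runner pools are split on (AVX2 vs AVX-512 foundation).
# Illustrative helper, not part of the PR; Linux-only (/proc/cpuinfo).
for flag in avx2 avx512f; do
  if grep -qw "$flag" /proc/cpuinfo; then
    echo "$flag: supported"
  else
    echo "$flag: not supported"
  fi
done
```

On an AVX512-capable pool such as linux.8xlarge.amd one would expect both lines to read "supported"; on an AVX2-only pool such as linux.12xlarge.amd, avx512f would report "not supported".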

Co-authored-by: charan-ponnada <charan.ponnada@amd.com>
Co-authored-by: kiriti-pendyala <kiriti.pendyala@amd.com>
Labels
open source · topic: not user facing (topic category) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
5 participants