8000 [dtensor][fix] fix _scaled_dot_product_flash_attention sharding by XilunWu · Pull Request #148125 · pytorch/pytorch · GitHub

[dtensor][fix] fix _scaled_dot_product_flash_attention sharding #148125

Closed

XilunWu wants to merge 2 commits into gh/XilunWu/120/base from gh/XilunWu/120/head
Commits

Commits on Feb 27, 2025

Commits on Feb 28, 2025
