[ROCM] Properly disable Flash Attention/Efficient Attention with environment variables by xinyazhang · Pull Request #133866 · pytorch/pytorch · GitHub

[ROCM] Properly disable Flash Attention/Efficient Attention with environment variables #133866

Closed
xinyazhang wants to merge 4 commits
Changes from 1 commit

Remove the legacy code that sets USE_MEM_EFF_ATTENTION to OFF
xinyazhang committed Aug 21, 2024
commit 298fee6f225a343a2e3ab3116dc5a57bd9b5eed6
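
With the override removed, whether these kernels are compiled is governed by the standard build-time environment variables rather than hard-coded CMake logic. As an illustrative (not prescriptive) invocation, a source build configured with USE_FLASH_ATTENTION=0 USE_MEM_EFF_ATTENTION=0 python setup.py develop should compile out both backends on ROCm as well.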
cmake/Dependencies.cmake: 3 changes (0 additions, 3 deletions)
@@ -1103,9 +1103,6 @@ if(USE_ROCM)
     message(STATUS "Disabling Kernel Assert for ROCm")
   endif()
 
-  if(USE_CUDA)
-    caffe2_update_option(USE_MEM_EFF_ATTENTION OFF)
-  endif()
 else()
   caffe2_update_option(USE_ROCM OFF)
 endif()
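
The deleted lines forced USE_MEM_EFF_ATTENTION off whenever a ROCm build also had USE_CUDA enabled, so the corresponding environment variable could never re-enable the backend there; with them gone, the variable is honored. Independent of these build-time flags, the same backends can also be steered at runtime. A minimal sketch, using the existing torch.backends.cuda toggles (not part of this PR):

import torch
import torch.nn.functional as F

# Runtime toggles for the fused SDPA backends. Unlike the build-time
# USE_FLASH_ATTENTION / USE_MEM_EFF_ATTENTION variables, these only
# steer kernel selection; they do not remove compiled kernels.
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)  # keep the portable fallback

q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
# With the fused backends disabled, this dispatches to the math kernel.
out = F.scaled_dot_product_attention(q, k, v)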