Update FlexAttention TMA usage to TensorDescriptor when we bump Triton and remove BlockPtr Usage · Issue #153678 · pytorch/pytorch

Description

@drisspg

Metadata

Assignees

No one assigned

    Labels

    module: flex attention
    module: higher order operators (torch.cond and similar)
    module: pt2-dispatcher (PT2 dispatcher-related issues, e.g. aotdispatch, functionalization, faketensor, custom-op)
    oncall: pt2
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
    upstream triton (Upstream Triton Issue)
