Inductor to fail gracefully on Voltas for bf16 tensors
Voltas do not have HW support for the bfloat16 data type, but the type is emulated in software, so PyTorch eager can use bfloat16 tensors, while Triton cannot.
So if a graph has either CUDA bf16 input or output tensors, raise a warning and skip the frame (i.e. fall back to eager).
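The gating logic can be sketched roughly as follows. This is a hypothetical illustration, not the actual PR code: the names `should_skip_frame`, `device_supports_bf16`, and the `FakeTensor` stand-in are invented for this sketch, and the real check lives inside Inductor's compilation path using `torch.cuda.get_device_capability`.

```python
# Hypothetical sketch: skip Inductor compilation when the GPU
# (e.g. Volta, compute capability 7.x) lacks HW bfloat16 support
# and the graph's inputs/outputs include CUDA bf16 tensors.
import warnings
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Stand-in for a graph input/output tensor (dtype/device only)."""
    dtype: str
    device: str


def device_supports_bf16(capability: tuple) -> bool:
    # Triton needs compute capability >= (8, 0) (Ampere) for bf16.
    return capability >= (8, 0)


def should_skip_frame(tensors, capability) -> bool:
    """Return True if the frame should fall back to eager mode."""
    if device_supports_bf16(capability):
        return False
    has_cuda_bf16 = any(
        t.dtype == "bfloat16" and t.device.startswith("cuda")
        for t in tensors
    )
    if has_cuda_bf16:
        warnings.warn(
            "bfloat16 is emulated in software on this GPU; "
            "skipping Inductor compilation for this frame"
        )
    return has_cuda_bf16


# On Volta (7, 0) a bf16 graph is skipped; on Ampere (8, 0) it compiles.
volta_io = [FakeTensor("bfloat16", "cuda:0")]
print(should_skip_frame(volta_io, (7, 0)))  # True
print(should_skip_frame(volta_io, (8, 0)))  # False
```

The key design choice mirrored here is warning-and-skipping rather than erroring out, so models keep running (just uncompiled) on older hardware.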
Fixes #118122 and #118581