use backend specializations in compile_and_call_fx_graph · pytorch/pytorch@9914ca3 · GitHub

Commit 9914ca3

use backend specializations in compile_and_call_fx_graph

ghstack-source-id: f0e905e
Pull Request resolved: #152601
1 parent 4157a24 commit 9914ca3

File tree

1 file changed (+4, -1 lines changed)

torch/_dynamo/output_graph.py

Lines changed: 4 additions & 1 deletion

```diff
@@ -1496,8 +1496,11 @@ def compile_and_call_fx_graph(self, tx, rv, root):
         # a lot of fake_tensor ownership assumptions and runs afoul of detect_fake_mode
         self.tracing_context.fake_mode = backend_fake_mode

+        compiled_fns = []
         with self.restore_global_state():
-            compiled_fn = self.call_user_compiler(gm)
+            for specialization in backend_specializations:
+                modified_gm = specialize(gm, specialization)
+                compiled_fns.append(self.call_user_compiler(modified_gm))

         from torch.fx._lazy_graph_module import _LazyGraphModule
```
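The pattern this diff introduces — compiling one artifact per backend specialization instead of a single `compiled_fn` — can be sketched in isolation. Note that `Specialization`, `specialize`, and `compile_backend` below are hypothetical stand-ins for illustration, not the actual `torch._dynamo` APIs; the real `specialize` rewrites an FX `GraphModule` and `call_user_compiler` invokes the user's backend.

```python
# Hypothetical sketch of "compile one artifact per specialization".
# Specialization, specialize, and compile_backend are illustrative
# stand-ins, NOT real torch._dynamo APIs.
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class Specialization:
    # e.g. a concrete hint for a symbolic dimension
    dim_hint: int


def specialize(fn: Callable, spec: Specialization) -> Callable:
    # Bake the specialization into the "graph" (here just a closure).
    def specialized(x):
        return fn(x) * spec.dim_hint
    return specialized


def compile_backend(fn: Callable) -> Callable:
    # Stand-in for self.call_user_compiler(gm); identity "compilation".
    return fn


def base_graph(x):
    return x + 1


backend_specializations = [Specialization(2), Specialization(3)]

# Mirrors the diff: collect one compiled fn per specialization,
# rather than compiling the unmodified graph once.
compiled_fns = []
for specialization in backend_specializations:
    modified = specialize(base_graph, specialization)
    compiled_fns.append(compile_backend(modified))

print([fn(10) for fn in compiled_fns])  # [22, 33]
```

The design choice mirrored here is that specialization happens before the user compiler runs, so each backend artifact can assume its specialization holds; some dispatch mechanism must later pick the right entry from `compiled_fns` at call time.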
