Back out "Back out "[const_fold] Set requires_grad based on the folded tensor; add device_for_folding option"" · pytorch/pytorch@d4bbebb · GitHub

Commit d4bbebb

jfix71 authored and pytorchmergebot committed
Back out "Back out "[const_fold] Set requires_grad based on the folded tensor; add device_for_folding option"" (#79696)
Summary: This is an un-backout, but with a small change to set the default device `device_for_folded_attrs="cuda"` instead of `"cpu"`, which should avoid BC issues for TRT lowering.

Original commit changeset: 4ae1863e28ff
Original Phabricator Diff: D37192230 (24c2aff)

Test Plan: Added unit test

Differential Revision: D37205432

Pull Request resolved: #79696
Approved by: https://github.com/dborkovic
1 parent 0545c85 commit d4bbebb
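
For context, the option restored here is the `device_for_folded_attrs` argument of `const_fold.split_const_subgraphs`. A minimal usage sketch, assuming a toy module `M` (illustrative, not from this commit) and pinning the device explicitly rather than relying on the new `"cuda"` default:

    import torch
    from torch.fx.experimental import const_fold

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.w = torch.nn.Parameter(torch.randn(4, 4))

        def forward(self, x):
            # (self.w + 1) depends only on module attributes, so it is
            # a candidate for constant folding.
            return x + (self.w + 1)

    gm = torch.fx.symbolic_trace(M())
    # Passing the device explicitly pins where the folded constants are
    # materialized, independent of the default changed by this commit.
    folded = const_fold.split_const_subgraphs(gm, device_for_folded_attrs="cpu")
    folded.run_folding()  # materialize the folded constants
    out = folded(torch.randn(4, 4))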

File tree

2 files changed: 4 additions & 2 deletions


test/fx/test_fx_const_fold.py

Lines changed: 3 additions & 1 deletion

@@ -693,7 +693,9 @@ def forward(self, x, y):
         gm = torch.fx.symbolic_trace(mod)
         in_x, in_y = torch.tensor([[-0.45]]), torch.tensor([0.9])
         ShapeProp(gm).propagate(in_x, in_y)
-        mod_folded: const_fold.FoldedGraphModule = const_fold.split_const_subgraphs(gm)
+        mod_folded: const_fold.FoldedGraphModule = const_fold.split_const_subgraphs(
+            gm, device_for_folded_attrs="cpu"
+        )
         self._verify_const_fold_mod(mod_folded)

         mod_folded.run_folding()

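The other half of the restored change is in the commit title: `requires_grad` on the folded attribute is now taken from the folded tensor itself. Continuing the sketch above (the attribute name is generated by const_fold and exposed as `fx_const_folded_attrs_name`, the parameter visible in the diff below):

    # Continuing the sketch above, after run_folding() has populated the
    # folded attribute on the FoldedGraphModule.
    folded_attr = getattr(folded, folded.fx_const_folded_attrs_name)
    # With this change, this should mirror the requires_grad of the tensor
    # produced by the constant subgraph rather than a hard-coded value.
    print(folded_attr.requires_grad)
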
torch/fx/experimental/const_fold.py

Lines changed: 1 addition & 1 deletion

@@ -22,7 +22,7 @@ def __init__(
         graph: torch.fx.Graph,
         const_subgraph: Optional[torch.fx.Graph] = None,
         fx_const_folded_attrs_name: str = None,
-        device_for_folded_attrs: str = "cpu",
+        device_for_folded_attrs: str = "cuda",
     ):
         # In init, we set graph's owning module to root which will make graph's
         # owning module be None because graph already have a owning module. We
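
Note that with the new `"cuda"` default, CPU-only environments would need to pass the device explicitly. One way a caller might guard this (a suggestion, not part of this commit):

    import torch
    # Choose a device that actually exists on the host, then pass it as
    # device_for_folded_attrs to split_const_subgraphs.
    device = "cuda" if torch.cuda.is_available() else "cpu"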

0 commit comments
