[primTorch] Implement NLL loss reference by rdspring1 · Pull Request #81128 · pytorch/pytorch
Closed

Changes from 1 commit (29 commits total)

Commits:
2256d36 Initial nll_loss implementation (rdspring1, Jun 29, 2022)
b1393e5 fixup (rdspring1, Jun 29, 2022)
f72db25 Disable validate_view_consistency check (rdspring1, Jun 29, 2022)
055e0e2 Merge 1d and 2d nll_loss functions (rdspring1, Jun 29, 2022)
96cc303 Add target class check - disabled because of FakeTensor (rdspring1, Jun 29, 2022)
370bc60 refactor helper function (rdspring1, Jul 8, 2022)
612ce91 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Sep 25, 2022)
e7a3ae4 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Sep 27, 2022)
44702b8 Address comments - rnd 1 (rdspring1, Sep 27, 2022)
c71d746 fixup (rdspring1, Sep 27, 2022)
e0554f2 Refactor class weight selection (rdspring1, Sep 28, 2022)
6aa6b62 Add comments (rdspring1, Sep 28, 2022)
dde53e3 Replace 4-D case for image inputs with general 3-D case (rdspring1, Sep 28, 2022)
4df9971 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Sep 28, 2022)
39883b6 add comments (rdspring1, Sep 28, 2022)
1a635cd Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Sep 28, 2022)
590866b Add class check (rdspring1, Sep 28, 2022)
1b88f57 Add FakeTensor Issue (rdspring1, Sep 28, 2022)
c59279e add zero-dim check (rdspring1, Sep 28, 2022)
e6d01e4 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Sep 30, 2022)
f2c9c3f Update comments (rdspring1, Sep 30, 2022)
10b85ff fixup (rdspring1, Sep 30, 2022)
96a6142 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Oct 3, 2022)
6cbdf01 lint (rdspring1, Oct 3, 2022)
746a60e Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Oct 11, 2022)
e1eb641 Merge branch 'master' of github.com:rdspring1/pytorch into ref_nll_loss (rdspring1, Oct 16, 2022)
ef5719e PR comments (rdspring1, Oct 16, 2022)
76bfc80 update test args (rdspring1, Oct 16, 2022)
3cd82ab add type promotion wrapper (rdspring1, Oct 17, 2022)
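
For context on what the PR implements: nll_loss selects the log-probability of each sample's target class, negates it, applies the optional per-class weight, and skips entries whose target equals ignore_index. Below is a minimal eager sketch of those semantics for the 2-D (N, C) case; it is illustrative only, not the reference implementation this PR adds, and nll_loss_sketch is a hypothetical name.

import torch

def nll_loss_sketch(input, target, weight=None, ignore_index=-100, reduction="mean"):
    # input: (N, C) log-probabilities; target: (N,) class indices.
    n = input.shape[0]
    ignore_mask = torch.eq(target, ignore_index)
    # Clamp ignored targets to a valid index so the indexing stays in-bounds.
    safe_target = torch.where(ignore_mask, torch.zeros_like(target), target)
    picked = -input[torch.arange(n), safe_target]
    w = weight[safe_target] if weight is not None else torch.ones_like(picked)
    w = torch.where(ignore_mask, torch.zeros_like(w), w)  # zero out ignored entries
    loss = picked * w
    if reduction == "none":
        return loss
    if reduction == "sum":
        return loss.sum()
    return loss.sum() / w.sum()  # "mean" divides by the total applied weight

logits = torch.log_softmax(torch.randn(4, 3), dim=1)
targets = torch.tensor([0, 2, -100, 1])
print(nll_loss_sketch(logits, targets))  # matches torch.nn.functional.nll_loss(logits, targets)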
Viewing commit ef5719e: "PR comments" (rdspring1 committed Oct 16, 2022)
Full commit hash: ef5719ee09ca8b27ef96d41391f694ee969cc4b3
torch/_refs/nn/functional/__init__.py: 9 changes (6 additions & 3 deletions)

@@ -459,18 +459,20 @@ def _nll_loss_nd(
flat_target = torch.flatten(target)
ignore_classes_mask = torch.eq(flat_target, ignore_index)

# TODO: Enable data-dependent checks with debug mode
# TODO: This check does not work with FakeTensor inputs; See Issue #85834
# Explicit cast for class_check to bool; See Issue #78071
"""
Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

These data-dependent checks are a pain.

Would you file an issue so we can discuss this as a group?

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I opened issue #85834.

num_classes = input.shape[1] if input.ndim > 1 else input.shape[0]
valid_classes_mask = torch.logical_and(
(flat_target >= 0), (flat_target < num_classes)
)
class_check = torch.all(torch.logical_or(ignore_classes_mask, valid_classes_mask))

# TODO: This check does not work with FakeTensor inputs; See Issue #85834
# Explicit cast for class_check to bool; See Issue #78071
utils.check(
isinstance(target, FakeTensor) or bool(class_check.item()),
lambda: "A target class is out-of-bounds and not the ignore index.",
)
"""

ignore_class_weight = torch.scalar_tensor(0, dtype=input.dtype, device=input.device)
class_weight = (
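
(The diff is collapsed at this point.) For context, the block disabled above is a data-dependent check: every target must either equal ignore_index or be a valid class index in [0, num_classes). A minimal eager sketch of the check follows; the bool(class_check.item()) call is the problem under tracing, because .item() reads the tensor's value on the host and a FakeTensor carries no data (see Issue #85834).

import torch

input = torch.randn(4, 3)               # (N, C) log-probabilities
target = torch.tensor([0, 2, -100, 1])  # change the 1 to 5 to trip the check
ignore_index = -100

flat_target = torch.flatten(target)
ignore_classes_mask = torch.eq(flat_target, ignore_index)
num_classes = input.shape[1] if input.ndim > 1 else input.shape[0]
valid_classes_mask = torch.logical_and(flat_target >= 0, flat_target < num_classes)
class_check = torch.all(torch.logical_or(ignore_classes_mask, valid_classes_mask))
# .item() synchronizes and materializes the value, which FakeTensor cannot do.
assert bool(class_check.item()), "A target class is out-of-bounds and not the ignore index."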
@@ -514,6 +516,7 @@ def _nll_loss_nd(


@register_decomposition(torch.ops.aten.nll_loss)
@out_wrapper()
def nll_loss(
(Inline review comment on def nll_loss)
Collaborator: try wrapping with type promotion decorator

input: TensorLikeType,
target: TensorLikeType,
(diff truncated)
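
The reviewer's suggestion above was addressed in commit 3cd82ab ("add type promotion wrapper"). As a sketch of what that can look like, the decorator below comes from PyTorch's internal torch._prims_common.wrappers module (the same place out_wrapper lives); the argument choices and toy_ref function here are assumptions for illustration, not the PR's exact code, and this internal API may change between versions.

import torch
from torch._prims_common import ELEMENTWISE_TYPE_PROMOTION_KIND
from torch._prims_common.wrappers import elementwise_type_promotion_wrapper, out_wrapper

@out_wrapper()  # adds the standard out= kwarg handling, as in the diff above
@elementwise_type_promotion_wrapper(
    type_promoting_args=("input",),
    type_promotion_kind=ELEMENTWISE_TYPE_PROMOTION_KIND.DEFAULT,
)
def toy_ref(input: torch.Tensor) -> torch.Tensor:
    # The wrapper computes the promoted dtype from the listed args, casts them
    # before the call, and casts the result according to the promotion kind.
    return -input

print(toy_ref(torch.tensor([1, -2, 3], dtype=torch.int32)).dtype)  # torch.int32 under DEFAULT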