Fix test_ops for tiny backend by Anish9901 · Pull Request #9302 · tinygrad/tinygrad

Fix test_ops for tiny backend #9302


Merged: 101 commits merged into tinygrad:master on Apr 1, 2025

Conversation

@Anish9901 (Contributor) commented Feb 28, 2025

With LLVM=1 LLVMOPT=0 TINY_BACKEND=1 python3 -m pytest -n auto test/test_ops.py

Test status on master (149 failures):

[screenshot: pytest failure summary, 2025-03-01]

Test status on this PR (all passing):

[screenshot: pytest all-passing summary, 2025-04-01]
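For context on the command above: TINY_BACKEND=1 makes test_ops create its torch tensors on tinygrad's torch backend instead of plain CPU torch, so every aten op in the suite is dispatched into tinygrad (LLVM=1 selects the LLVM device; LLVMOPT=0 disables its optimizer passes). A minimal sketch of using that device directly, assuming the module path and "tiny" device name from the tinygrad repo:

import torch
import extra.torch_backend.backend  # registers the "tiny" device (module path assumed from the tinygrad repo)

a = torch.ones(4, device="tiny")  # allocated through tinygrad
b = (a + a).cpu()                 # each aten op here dispatches into tinygrad
print(b)                          # tensor([2., 2., 2., 2.])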

This branch is currently behind tinygrad/master. The line count difference bot is disabled.

@chenyuxyz (Collaborator) commented Feb 28, 2025

locked to you since you have done more than half of these

also it needs to work correctly in principle, not just hacking to make the test pass

out = Tensor.avg_pool2d(self, kernel_size, stride, dilation=1, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad)
return wrap(out.gradient(self, gradient=grad_out)[0])

@torch.library.impl("aten::replication_pad1d_backward", "privateuseone")

Collaborator review comment on the hunk above:

Don't copy paste!
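For readers following along, the hunk follows the backend's general recipe for backward ops: re-run the forward in tinygrad, then ask Tensor.gradient for the derivative with respect to the input. A hedged sketch of that pattern, assuming the backend's wrap/unwrap torch-to-tinygrad conversion helpers (the real impl's signature may differ):

import torch
from tinygrad import Tensor

@torch.library.impl("aten::avg_pool2d_backward", "privateuseone")
def avg_pool2d_backward(grad_out, input, kernel_size, stride, padding, ceil_mode, count_include_pad, divisor_override=None):
  x = unwrap(input)  # torch tensor on the tiny device -> tinygrad Tensor (assumed helper)
  # re-run the forward with tinygrad ops...
  out = Tensor.avg_pool2d(x, kernel_size, stride, dilation=1, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad)
  # ...and differentiate it w.r.t. the input, weighted by the incoming gradient
  return wrap(out.gradient(x, gradient=unwrap(grad_out))[0])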

@chenyuxyz (Collaborator) commented:

feels like there's something subtly wrong. if I add contiguous and make it return wrap(unwrap(self)[slices].contiguous()) in slice_tensor, it ran but gave wrong output

@Anish9901 (Contributor, author) commented:

Yeah, if I make the expand contiguous instead of slice_tensor, e.g. "aten.expand": lambda self, size, implicit=False: self.expand(size).contiguous(), it runs further than using contiguous in slice_tensor but still gives a wrong answer.
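For concreteness, the two experiments traded above look roughly like this (a sketch: slice_tensor and the aten-name-to-lambda table mirror the backend's structure, and wrap/unwrap are its assumed conversion helpers):

# variant 1 (the slice_tensor change): force a copy after slicing
def slice_tensor(self, slices):
  return wrap(unwrap(self)[slices].contiguous())

# variant 2 (the expand change): force a copy after the expand instead
tiny_backend = {
  "aten.expand": lambda self, size, implicit=False: self.expand(size).contiguous(),
}

Both run further than master but still produce wrong values, which is why the review below insists the realize/contiguous behavior be understood rather than papered over.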

@chenyuxyz (Collaborator) left a review comment:

cool, the realize and contiguous need to be understood though

@Anish9901 Anish9901 marked this pull request as ready for review March 24, 2025 19:07
@Anish9901 Anish9901 requested a review from chenyuxyz March 26, 2025 22:36
@chenyuxyz (Collaborator) left a review comment:

Not sure what it will take to get it right, but not figuring it out just makes iteration more difficult later. There's clearly a bug somewhere.

If you don't think you will be able to fix it, feel free to ask around on Discord and you can figure out how to split the bounty.

@tocubed (Contributor) commented Mar 27, 2025

Spent some time debugging the test_pad_circular_mode failure. Some observations:

  • The incorrect offset passed to as_strided does seem to trace to storage_offset not being set, but I looked for something equivalent in XLA and didn't find it, so it seemed odd that we would need it (see the as_strided sketch after this list).
  • Using the XLA torch device works for test_pad_circular_mode (although in general XLA fails a bunch of tests):
/tinygrad$ LLVM=1 XLA_BACKEND=1 TORCH_DEBUG=1 pytest -n auto -vv test/test_ops.py
================================= test session starts =================================
.... truncated ....
FAILED test/test_ops.py::TestOps::test_max_pool2d_bigger_stride_dilation
FAILED test/test_ops.py::TestOps::test_or
FAILED test/test_ops.py::TestOps::test_pad_replicate_mode
FAILED test/test_ops.py::TestOps::test_xor
==== 37 failed, 339 passed, 6 skipped, 2 xfailed, 5 warnings in 181.83s (0:03:01) =====
  • XLA also fails the test (wrong values) if I enable the TorchDispatchMode log in order to trace the dispatches. This seems very odd since the mode doesn't really do anything; I'm not even printing the tensors (a minimal logging mode is sketched after this list).
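Two small sketches to make the bullets above concrete. First, the storage_offset point: torch.as_strided only reproduces a view that starts partway into its storage if that offset is forwarded explicitly (plain CPU torch, purely illustrative):

import torch

base = torch.arange(12.)
v = base[4:].view(2, 4)          # a view starting at element 4 of the storage
print(v.storage_offset())        # 4

ok  = torch.as_strided(base, (2, 4), (4, 1), storage_offset=4)  # matches v
bad = torch.as_strided(base, (2, 4), (4, 1))                    # offset defaults to 0
print(torch.equal(ok, v), torch.equal(bad, v))                  # True False

And a minimal logging TorchDispatchMode of the kind the last bullet describes; it only prints each dispatched op and forwards it unchanged, which is why it changing XLA's results is so surprising:

from torch.utils._python_dispatch import TorchDispatchMode

class LogDispatches(TorchDispatchMode):
  def __torch_dispatch__(self, func, types, args=(), kwargs=None):
    print(func)  # e.g. aten.add.Tensor
    return func(*args, **(kwargs or {}))

# usage: run the failing test body inside `with LogDispatches():`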

@chenyuxyz (Collaborator) commented:

fine to merge other non-controversial changes so that this does not go stale

@Anish9901 Anish9901 requested a review from chenyuxyz March 31, 2025 18:57
@chenyuxyz chenyuxyz merged commit a1ee4d5 into tinygrad:master Apr 1, 2025
32 checks passed
@chenyuxyz (Collaborator) commented:

congrats! bounty is yours. Can pay via PayPal or USDC on ETH, e-mail george@tinygrad.org to claim.

Labels: bounty locked (bounty is locked to someone)
4 participants