Fix test_ops for tiny backend by Anish9901 · Pull Request #9302 · tinygrad/tinygrad · GitHub



Merged: 101 commits, merged Apr 1, 2025

Commits (101, changes shown from 1 commit)
d9418ce
fix some tests in test_ops for torch backend (171 failing)
Anish9901 Feb 26, 2025
204f2c5
merge master
Anish9901 Feb 27, 2025
cd704c0
fix more tests (135 failures)
Anish9901 Feb 27, 2025
f0461cf
fix tests (126 failing)
Anish9901 Feb 27, 2025
503bdc8
handle transposed convs (109 tests failing)
Anish9901 Feb 27, 2025
859f4c3
fix slice
Anish9901 Feb 27, 2025
221a841
fix lshift & rshift and more tests (87 tests failing)
Anish9901 Feb 27, 2025
077380a
merge master
Anish9901 Feb 27, 2025
6f5bd3e
revert accidental change
Anish9901 Feb 27, 2025
e1f2f83
merge master
Anish9901 Feb 28, 2025
121e317
remove unnecessary changes (82 failures)
Anish9901 Feb 28, 2025
4281f80
fix backward for avg_pool2d (78 failures)
Anish9901 Feb 28, 2025
41f59f1
fix backward for avg_pool2d (78 failures)
Anish9901 Feb 28, 2025
7428b1b
fix replication backpass
Anish9901 Feb 28, 2025
9082525
fix reflection pad back pass (71 failures)
Anish9901 Feb 28, 2025
180c4e3
Merge branch 'master' into test_ops_tiny_be
Anish9901 Feb 28, 2025
d429ab9
cummax with indices, aten.mv and move out methods (67 failures)
Anish9901 Feb 28, 2025
200c43d
extract avg_pool2d and avg_pool3d to separate functions (62 failures)
Anish9901 Feb 28, 2025
31cd144
revert changes for cat_out
Anish9901 Feb 28, 2025
d15cc92
rewrite avg_pool and pad without repetition
Anish9901 Mar 1, 2025
da824fa
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 1, 2025
2ca2726
remove duplicates from decomps
Anish9901 Mar 1, 2025
3e48de8
slice rewrite and add slice_backward (59 failures)
Anish9901 Mar 1, 2025
cc760e0
add dtype fixup from https://github.com/tinygrad/tinygrad/pull/9297
Anish9901 Mar 1, 2025
f3afa0b
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 1, 2025
eec2e60
fix linter error and remove Tensor.pad (48 failures)
Anish9901 Mar 1, 2025
1e63367
add select_backward and index_put (40 failures)
Anish9901 Mar 1, 2025
a5b4976
fix some more tests (36 failures)
Anish9901 Mar 1, 2025
eac9c78
fix more tests (12 failures)
Anish9901 Mar 2, 2025
7feb0b9
some cleanups and fix couple more tests (10 failures)
Anish9901 Mar 2, 2025
8e17a94
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 2, 2025
93d97ad
cleaner way to write upsample
Anish9901 Mar 2, 2025
be56b3d
some more upsample cleanups
Anish9901 Mar 3, 2025
4452f05
use lambda for upsample
Anish9901 Mar 3, 2025
03a8237
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 3, 2025
897d83b
add autowrapper for upsample forward
Anish9901 Mar 3, 2025
b0d0af7
cumsum and max_dim without aten functions
Anish9901 Mar 3, 2025
c838601
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 3, 2025
b316198
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 3, 2025
eb9d7b7
revert _log_softmax
Anish9901 Mar 3, 2025
d959d95
fix more tests (1 failure)
Anish9901 Mar 3, 2025
c62c0fe
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 3, 2025
b65db9b
make linter happy
Anish9901 Mar 3, 2025
75c7993
move import to appropriate func
Anish9901 Mar 3, 2025
0cdb41c
make linter happy
Anish9901 Mar 4, 2025
376a13d
add codes for noqa
Anish9901 Mar 4, 2025
9b297e9
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 4, 2025
1e4d868
some more refactors
Anish9901 Mar 4, 2025
2b2ff69
remove comment
Anish9901 Mar 4, 2025
f57803b
remove dependency on aten function for conv backward
Anish9901 Mar 4, 2025
a1f1fd6
some more refactors
Anish9901 Mar 4, 2025
e88bb9d
add returns
Anish9901 Mar 4, 2025
0167fd5
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 5, 2025
e1bf597
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 5, 2025
c8a7813
revert a change from merge
Anish9901 Mar 5, 2025
090845c
some cleanups
Anish9901 Mar 5, 2025
cd0ad8e
remove whitespace
Anish9901 Mar 5, 2025
96ea963
remove ruff change
Anish9901 Mar 5, 2025
a9f2808
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 11, 2025
01bbff1
revert upsample
Anish9901 Mar 11, 2025
c4e2ac4
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 12, 2025
674b35b
add masked_fill_.Tensor and scatter.src_out
Anish9901 Mar 12, 2025
ac505fd
add todo
Anish9901 Mar 12, 2025
d285d79
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 13, 2025
cd89c25
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 14, 2025
4cf6f7a
fix test_biased_conv2d
Anish9901 Mar 15, 2025
701e216
fix test_var_one_in_axis & test_std_one_in_axis but break test_biased…
Anish9901 Mar 15, 2025
53205ed
revert torch_debug
Anish9901 Mar 16, 2025
98bc8a2
revert torch_debug
Anish9901 Mar 16, 2025
611d302
skip test_gather_failure for the tiny backend
Anish9901 Mar 17, 2025
68448c7
make padding registration more concise
Anish9901 Mar 17, 2025
a7d41d2
add nonzero
Anish9901 Mar 17, 2025
e352b44
remove scatter_add since we already have the out
Anish9901 Mar 17, 2025
ae3f35a
fix scatter
Anish9901 Mar 17, 2025
9d969a2
remove some repetition
Anish9901 Mar 17, 2025
2e0dd3f
make upsample backward registrations more concise
Anish9901 Mar 17, 2025
bb8df79
remove select.int
Anish9901 Mar 17, 2025
384dab1
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 17, 2025
7f47846
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 17, 2025
4d384f0
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 19, 2025
9495e23
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 19, 2025
1e7f648
use Tensor.cumsum
Anish9901 Mar 19, 2025
6ac2b97
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 20, 2025
4781339
realize conv2d outputs before backward to fix test_biased_conv2d
Anish9901 Mar 20, 2025
c535b5c
add a todo for realize (1 failure)
Anish9901 Mar 20, 2025
1e87791
Merge branch 'master' into test_ops_tiny_be
chenyuxyz Mar 21, 2025
7db0aa9
add new_empty and new_empty_strided
Anish9901 Mar 21, 2025
f1f6a6a
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 22, 2025
889e1e8
make test_pad_circular_mode forward only and remove redundant stuff
Anish9901 Mar 23, 2025
4513874
fix linter errors
Anish9901 Mar 23, 2025
da178ca
remove expect failure
Anish9901 Mar 23, 2025
f431880
just tb
Anish9901 Mar 23, 2025
4cbdb59
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 24, 2025
fb316eb
slice is a view_op
Anish9901 Mar 24, 2025
48e5035
contiguous only when lazydata.is_realized
Anish9901 Mar 24, 2025
eb4f1a8
fix backward for test_pad_circular_mode
Anish9901 Mar 26, 2025
05c97f0
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 26, 2025
c36a190
revert torch.nn.functional.pad override
Anish9901 Mar 27, 2025
86e46d9
Merge branch 'master' into test_ops_tiny_be
Anish9901 Mar 31, 2025
72ee796
add transpose.int and make constant_pad_nd contiguous
Anish9901 Mar 31, 2025
0a865ad
slice_backwards has no kwargs
Anish9901 Mar 31, 2025
extract avg_pool2d and avg_pool3d to separate functions (62 failures)
Anish9901 committed Feb 28, 2025
commit 200c43dfd172897fe6e1b859d7a3b47d7059b021
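For context, this commit moves `aten.avg_pool2d` and `aten.avg_pool3d` from the lambda decomposition table into explicit `torch.library.impl` registrations. One detail the new functions handle: aten encodes an omitted stride as the empty list `[]`, while tinygrad's `Tensor.avg_pool2d` expects `None` to mean "stride defaults to the kernel size". A minimal sketch of that normalization (plain Python, no torch or tinygrad required; the helper name `normalize_stride` is hypothetical, used here only for illustration):

```python
def normalize_stride(stride):
    # aten passes [] when the caller omits stride; tinygrad's avg_pool2d
    # treats None as "use the kernel size as the stride"
    return None if stride == [] else stride

print(normalize_stride([]))      # None
print(normalize_stride([2, 2]))  # [2, 2]
```

This is exactly the `if stride == []: stride = None` line that appears in both registrations in the diff below.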
12 changes: 10 additions & 2 deletions extra/torch_backend/backend.py
@@ -226,6 +226,16 @@ def cummax(self, dim):
def view_dtype(self, dtype):
return wrap(unwrap(self).bitcast(_from_torch_dtype(dtype)))

@torch.library.impl("aten::avg_pool2d", "privateuseone")
def avg_pool2d(self, kernel_size, stride=[], padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None):
if stride == []: stride = None
return wrap(unwrap(self).avg_pool2d(kernel_size, stride, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad))

@torch.library.impl("aten::avg_pool3d", "privateuseone")
def avg_pool3d(self, kernel_size, stride=[], padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None):
if stride == []: stride = None
return wrap(unwrap(self).avg_pool2d(kernel_size, stride, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad))

@torch.library.impl("aten::_copy_from", "privateuseone")
def _copy_from(src, dest, non_blocking=False):
if str(src.device) == "tiny" and str(dest.device) == "tiny":
@@ -440,8 +450,6 @@ def _wrap_out(*args, **kwargs):
"aten.scatter_add": lambda self, dim, index, src: Tensor.scatter_reduce(self, dim, index, src, reduce='sum'),
"aten.scatter_add.out": lambda self, dim, index, src, out: out.replace(Tensor.scatter_reduce(self, dim, index, src, reduce='sum')),
"aten.scatter_reduce.two": lambda self, dim, index, src, reduce, include_self=True: Tensor.scatter_reduce(self, dim, index, src, reduce=reduce, include_self=include_self),
"aten.avg_pool2d": lambda self, kernel_size, stride=[], padding=0, ceil_mode=False, count_include_pad=True: Tensor.avg_pool2d(self, kernel_size, stride, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad),
"aten.avg_pool3d": lambda self, kernel_size, stride=[], padding=0, ceil_mode=False, count_include_pad=True: Tensor.avg_pool2d(self, kernel_size, stride, padding=padding, ceil_mode=ceil_mode, count_include_pad=count_include_pad),
# fix backward for these
"aten.upsample_linear1d": lambda self, size, align_corners=False: Tensor.interpolate(self, size, mode="linear", align_corners=align_corners),
# "aten.upsample_linear1d_backward": lambda self, size, gradient, align_corners=False: Tensor.interpolate(self, size, mode="linear", align_corners=align_corners).backward(Tensor(gradient)),
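To make the forwarded semantics concrete, here is a hedged plain-Python reference of square-window 2D average pooling with the `padding` and `count_include_pad` behavior the registration passes through. This is a sketch for illustration only, not tinygrad's implementation; `avg_pool2d_ref` is a hypothetical name, the input is a list of lists, and `ceil_mode` is assumed False:

```python
def avg_pool2d_ref(x, k, stride=None, padding=0, count_include_pad=True):
    # stride=None mirrors tinygrad's default: stride falls back to the kernel size
    if stride is None: stride = k
    H, W = len(x), len(x[0])
    oh = (H + 2 * padding - k) // stride + 1  # output height (floor mode)
    ow = (W + 2 * padding - k) // stride + 1  # output width (floor mode)
    out = []
    for i in range(oh):
        row = []
        for j in range(ow):
            total, n = 0.0, 0
            for di in range(k):
                for dj in range(k):
                    yi, xj = i * stride + di - padding, j * stride + dj - padding
                    inside = 0 <= yi < H and 0 <= xj < W
                    if inside: total += x[yi][xj]
                    # count_include_pad=True divides by the full window size,
                    # counting padded positions; False divides by valid cells only
                    if inside or count_include_pad: n += 1
            row.append(total / n)
        out.append(row)
    return out

print(avg_pool2d_ref([[1.0, 2.0], [3.0, 4.0]], 2))  # [[2.5]]
```

With `padding=1` and `count_include_pad=False`, each corner window averages only its single valid cell, so the 2x2 input is returned unchanged; with `count_include_pad=True` the same sums are divided by the full window size of 4 instead.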