Update the heuristic for AArch64 bmm/baddbmm (#149122)
Updates the heuristic for bmm/baddbmm and consolidates all heuristic logic in a single location:
- The goal of the consolidation is to improve the maintainability and readability of the heuristic logic. Instead of having different parts scattered across two files, this patch centralizes everything inside `Matmul.cpp`, where heuristic-based selection for mkldnn already exists.
- The logic of the check itself doesn't change (existing code is reused where possible), but a separate heuristic threshold for bmm/baddbmm is introduced based on newer benchmarking data. Use the script below to see the performance improvement for bmm from the new heuristic:
```
import torch
import time

# Set below to True to use only the cases selected by exactly one of the
# heuristics (i.e. the cases where the old and new heuristics disagree).
USE_ONLY_DIVERGENT_TEST_CASES = True

BATCH_SIZES = [1, 8, 32, 64, 128, 256]
M_DIMS = [4, 8, 16, 32, 64, 256, 512]
N_DIMS = [4, 8, 16, 32, 64, 256, 512]
K_DIMS = [4, 8, 16, 32, 64, 256, 512]
ITERS = 50

def old_heuristic(m, n, k):
    is_above_min_dims = m > 8 and n > 8 and k > 8
    is_above_min_size = m * n * k > 8_192
    return is_above_min_dims and is_above_min_size

def new_heuristic(b, m, n, k):
    return b * b * m * n * k >= 4_194_304

def generate_test_cases():
    test_cases = []
    for b in BATCH_SIZES:
        for m in M_DIMS:
            for n in N_DIMS:
                for k in K_DIMS:
                    if USE_ONLY_DIVERGENT_TEST_CASES:
                        if old_heuristic(m, n, k) != new_heuristic(b, m, n, k):
                            test_cases.append([b, m, n, k])
                    else:
                        test_cases.append([b, m, n, k])
    return test_cases

def test(x, y):
    # Warm up before timing.
    for _ in range(5):
        torch.bmm(x, y)
    perf = 0.0
    for _ in range(ITERS):
        start = time.time()
        torch.bmm(x, y)
        end = time.time()
        perf += (end - start) / ITERS
    return perf

def main():
    print(f"{'b':<10}{'m':<10}{'n':<10}{'k':<10}{'time (s)':10}")
    cumulative_mean_time = 0.0
    for b, m, n, k in generate_test_cases():
        mean_time = test(torch.rand(b, m, n), torch.rand(b, n, k))
        cumulative_mean_time += mean_time
        print(f"{b:<10}{m:<10}{n:<10}{k:<10}{mean_time:10.3e}")
    print(f"Cumulative mean time = {cumulative_mean_time:.4f} s")

if __name__ == "__main__":
    main()
```
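For intuition, here is a small worked example (not part of the original PR; the two heuristic functions are restated from the script above so the snippet is self-contained) showing two shapes on which the heuristics diverge, one in each direction:
```
def old_heuristic(m, n, k):
    return m > 8 and n > 8 and k > 8 and m * n * k > 8_192

def new_heuristic(b, m, n, k):
    return b * b * m * n * k >= 4_194_304

# b=1, m=n=k=32: old accepts (32*32*32 = 32768 > 8192),
# new rejects (1*1*32768 = 32768 < 4194304).
print(old_heuristic(32, 32, 32), new_heuristic(1, 32, 32, 32))  # True False

# b=256, m=n=k=8: old rejects (dims are not > 8),
# new accepts (256*256*8*8*8 = 33554432 >= 4194304).
print(old_heuristic(8, 8, 8), new_heuristic(256, 8, 8, 8))      # False True
```
The new threshold therefore skips small single-batch problems but admits large batches of small matrices, which the old per-matrix check could never accept.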
From the script we see that the cumulative mean time across all test cases (at 16 threads) is:
- 1.6195 s for the old heuristic
- 0.7012 s for the new heuristic
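As a reproduction hint (not part of the original PR), the intra-op thread count can be pinned before running the script so that the timings are comparable to the 16-thread numbers above; a minimal sketch:
```
import torch

# Pin PyTorch's intra-op parallelism to 16 threads before benchmarking,
# so the measurements correspond to the 16-thread configuration above.
torch.set_num_threads(16)
print(torch.get_num_threads())  # 16
```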
Pull Request resolved: #149122
Approved by: https://github.com/fadara01, https://github.com/aditew01, https://github.com/malfet