Enable qint8 and quint8 add for AArch64 using ACL directly #146620
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146620
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit a0d2046 with merge base 6c3492b.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
#pragma once

#include <ATen/Config.h>
#if defined(__aarch64__) && AT_MKLDNN_ACL_ENABLED()
Why restrict it to aarch64?
Thanks for the quick review!
Yes, this is redundant; I have rebased on top of the latest changes in #145942, which address this.
Same generic feedback as on the previous PR: why restrict it to aarch64? Shouldn't IS_ACL_ENABLE be sufficient?
Force-pushed from 4a1385a to bb887be
Among many things, this version of ACL fixes the redundant declaration warning that we're blocked on in #145942, #146620, and #147337, and introduces better scheduling heuristics for GEMMs. Pull Request resolved: #147454. Approved by: https://github.com/malfet
@pytorchbot label "arm priority"
@malfet, @digantdesai - Could you please give this another look?
ACL is already built with PyTorch as a shared library when USE_MKLDNN_ACL is set. Currently, it is only used indirectly in ATen via oneDNN for AArch64 targets. However, there are cases where it makes sense to use ACL directly, without oneDNN as an intermediary - e.g. quantization; see pytorch#145942, pytorch#147337, pytorch#146620. This patch enables such use cases by exposing ACL to ATen.
This enables a fast path for eager-mode dynamic quantization for AArch64 through Arm Compute Library (ACL) directly.

Context: PR pytorch#126687 enabled an optimized implementation of qlinear_dynamic for aarch64 through ideep → oneDNN → ACL, which improved performance by ~10x compared to the previous implementation. However, the current qlinear_dynamic path (ideep → oneDNN → ACL) suffers from high overhead due to the API friction between the stateless oneDNN API and the stateful ACL low-precision GEMM (lowp_gemm) API. For example, ACL's lowp_gemm objects cache information such as weight reductions or weights in an optimized memory format, which oneDNN does not allow due to its stateless nature. Hence, ACL currently runs a (redundant) sum of columns and a pre-transposition (to the GEMM kernel's optimal format) for each GEMM operation.

This PR addresses the sub-optimalities above by integrating ACL directly with qlinear_dynamic. This approach yields an average speedup (averaged over context lengths of 2^3 up to 2^9) of ~50% for bert-base-uncased, bert-large-uncased, roberta-base, and distilbert-base-uncased with 16 threads on a Neoverse-V1 (with transformers==4.48).

To achieve this, we introduce PackedLinearWeightsACL (as a subclass of PackedLinearWeightsOnednn) with an implementation of qlinear_dynamic that uses ACL directly, while qlinear still follows the oneDNN path.
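For reference, a minimal sketch (not part of the PR; it uses the standard PyTorch eager-mode API) of how a workload reaches the qlinear_dynamic path that this change accelerates:

import torch
import torch.nn as nn

# Dynamic quantization swaps nn.Linear for its dynamically quantized
# counterpart; on AArch64 builds with USE_MKLDNN_ACL, the int8 matmul
# inside executes via qlinear_dynamic (ACL-backed after this stack).
model = nn.Sequential(nn.Linear(768, 3072), nn.ReLU(), nn.Linear(3072, 768)).eval()
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
out = qmodel(torch.randn(8, 768))  # each Linear runs through qlinear_dynamic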
This enables a fast path for eager-mode static quantization for AArch64 through Arm Compute Library (ACL) directly. PR pytorch#145942 addressed the high overhead in qlinear_dynamic on AArch64 (due to redundant weight pre-transpositions and reductions) by enabling a path that calls ACL directly. This does the same thing, but for (static) qlinear.
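For reference, a hedged sketch of the eager-mode static quantization flow that exercises (static) qlinear; the module shape and qconfig choice here are illustrative, not taken from the PR:

import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # fp32 -> quantized
        self.fc = nn.Linear(768, 768)
        self.dequant = torch.ao.quantization.DeQuantStub()  # quantized -> fp32
    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

m = M().eval()
m.qconfig = torch.ao.quantization.get_default_qconfig("onednn")  # illustrative backend choice
prepared = torch.ao.quantization.prepare(m)
prepared(torch.randn(4, 768))                  # calibration pass to collect stats
quantized = torch.ao.quantization.convert(prepared)
quantized(torch.randn(4, 768))                 # Linear now runs through static qlinear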
Closing in favor of ghstack PR #148653, which has all comments addressed.
This enables qint8 and quint8 add for AArch64 through Arm Compute Library (ACL) directly.
It is based on the changes in PR #145942, which enable the use of ACL directly in ATen.
The relative performance improvement is ~15x with OMP_NUM_THREADS=1 and ~5.4x with OMP_NUM_THREADS=32.
Script to benchmark quantised add performance:
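(The original script was not captured in this page; below is a minimal sketch of one, with shape, scale, and iteration count chosen purely for illustration.)

import time
import torch

def bench_quantized_add(shape=(1024, 1024), dtype=torch.qint8, iters=1000):
    # scale/zero_point are illustrative; quint8 conventionally uses zero_point=128
    zp = 0 if dtype == torch.qint8 else 128
    qa = torch.quantize_per_tensor(torch.rand(shape), 0.05, zp, dtype)
    qb = torch.quantize_per_tensor(torch.rand(shape), 0.05, zp, dtype)
    for _ in range(10):  # warm-up
        torch.ops.quantized.add(qa, qb, 0.05, zp)
    start = time.perf_counter()
    for _ in range(iters):
        torch.ops.quantized.add(qa, qb, 0.05, zp)
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    for dt in (torch.qint8, torch.quint8):
        print(f"{dt}: {bench_quantized_add(dtype=dt) * 1e6:.1f} us/iter")

Run it with OMP_NUM_THREADS=1 and OMP_NUM_THREADS=32 to reproduce the single- and multi-threaded comparisons quoted above.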
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10