Qualcomm AI Engine Direct - Enable custom operator #8726
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8726
Note: Links to docs will display an error until the docs builds have been completed. ✅ No failures as of commit de02835 with merge base c6c3616. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @cccclai, this PR is to support custom kernels in the QNN backend.
The README is nice, I will try to reproduce this. Thanks!
Force-pushed 064d12f to 5f65810
Apologies, this is still pending. I will also help review it next week.
It's fine with me. What are your thoughts if I add a check for the namespace based on the current design?
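For context, a minimal sketch of what such a namespace check could look like on the exported graph, assuming the partitioner sees torch.fx call_function nodes with an OpOverload target (the helper name and the set of built-in namespaces are illustrative, not the code in this PR):

```python
import torch

BUILT_IN_NAMESPACES = {"aten", "prim"}  # assumed list, for illustration only

def is_custom_op(node: torch.fx.Node) -> bool:
    # Nodes lowered from torch ops are call_function nodes whose target
    # carries an operator schema.
    if node.op != "call_function" or not hasattr(node.target, "_schema"):
        return False
    # Schema names look like "namespace::op_name".
    namespace = node.target._schema.name.split("::")[0]
    return namespace not in BUILT_IN_NAMESPACES
```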
Force-pushed 5f65810 to eaf8aa6
@cccclai I have rebased this PR. Thanks!
Force-pushed 5b04add to 3bbb5c0
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Let me clarify the following.
++ @haowhsu-quic for awareness
When I check out flatbuffers to 338393f8, I get the following error with ./backends/qualcomm/scripts/build.sh:

Error log (abridged):
Error while generating /local3/mnt/workspace/shewu/executorch/executorch_shewu/executorch/build-android/executorch_srcs.cmake. Exit code: 1
Output: Error: Traceback (most recent call last): ... The above exception was the direct cause of the following exception: Traceback (most recent call last): ...
(the traceback lists flatbuffers headers such as flatbuffers/include/flatbuffers/buffer_ref.h, code_generator.h, default_allocator.h, detached_buffer.h, file_manager.h, ...)
CMake Error at tools/cmake/Utils.cmake:109 (message)
-- Configuring incomplete, errors occurred!
Whoops, I got it. That commit is very different from the current one; it is missing some header files...
Force-pushed eff8673 to a22cdd2
It seems flatbuffers::Vector only takes one template argument. I've made the adjustments. Could you please try again?
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed a22cdd2 to f28ab0e
It seems that a dataclass cannot use a mutable value as the default for a class attribute.
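For reference, this is standard dataclasses behavior: mutable defaults such as lists or dicts have to go through field(default_factory=...). A minimal sketch, with made-up class and field names for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OpPackageOptions:
    # op_package_paths: List[str] = []  # raises ValueError: mutable default is not allowed
    op_package_paths: List[str] = field(default_factory=list)   # use a factory instead
    op_name_map: Dict[str, str] = field(default_factory=dict)
```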
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed f28ab0e to 655e416
Done. Thanks :)
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary:
- Support registering op packages in the QNN backend
- Add an example script to run a torch custom op with a QNN op package
- Allow op packages to override torch built-in operators
- Add an op package example
- Move test_custom_op to TestUtilScript
- Modify the dlopen flag for the QNN library
- Generate the custom op based on the meta and _schema.arguments of torch.fx.Node
- Add a README for the custom op
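As a rough illustration of the last two items, not the exact code in this PR: a custom torch op can be declared through torch.library, and the lowering side can read the output metadata and schema argument names off the torch.fx.Node. The "my_ops" namespace, the op name, and the helper below are assumptions made for the example:

```python
import torch
from torch.library import Library

# Hypothetical op package namespace and custom op, for illustration only.
my_lib = Library("my_ops", "DEF")
my_lib.define("my_custom_add(Tensor x, Tensor y) -> Tensor")

def my_custom_add(x, y):
    return x + y

my_lib.impl("my_custom_add", my_custom_add, "CompositeExplicitAutograd")

def describe_custom_op(node: torch.fx.Node):
    # node.meta["val"] holds the FakeTensor recorded during export,
    # which gives the output shape/dtype needed for the QNN op definition.
    out = node.meta["val"]
    # node.target._schema.arguments lists the formal parameters of the op.
    arg_names = [arg.name for arg in node.target._schema.arguments]
    return list(out.shape), out.dtype, arg_names
```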
Force-pushed 655e416 to de02835
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary:
Reproduce commands: