Update on "Better graph break msg on Dynamo x Python C++ extension" · pytorch/pytorch@870591d · GitHub
Commit 870591d

Update on "Better graph break msg on Dynamo x Python C++ extension"
Dynamo graph breaks on Python C/C++ extensions (e.g. pybind'ed functions). The usual way to handle this is to turn those extensions into custom ops. This PR adds a clearer error message suggesting that.

Fixes #126799

Test Plan:
- new test

[ghstack-poisoned]
2 parents 08c894b + ea2e612 · commit 870591d

File tree

3 files changed: +77 -10 lines changed

docs/source/library.rst

Lines changed: 2 additions & 0 deletions

@@ -1,3 +1,5 @@
+.. _torch-library-docs:
+
 torch.library
 ===================================
 .. py:module:: torch.library
docs/source/custom_ops_landing_page.rst (new file)

Lines changed: 63 additions & 0 deletions

@@ -0,0 +1,63 @@
.. _custom-ops-landing-page:

PyTorch Custom Operators Landing Page
=====================================

PyTorch offers a large library of operators that work on Tensors (e.g. :func:`torch.add`,
:func:`torch.sum`, etc.). However, you may wish to bring a new custom operation to PyTorch
and have it behave like PyTorch's built-in operators. In order to do so, you must
register the custom operation with PyTorch via the Python :ref:`torch-library-docs` or C++
``TORCH_LIBRARY`` APIs.
TL;DR
-----

How do I author a custom op from Python?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
   [comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4

Please see the `Python Custom Operators tutorial <https://colab.research.google.com/drive/1xCh5BNHxGnutqGLMHaHwm47cbDL9CB1g#scrollTo=gg6WorNtKzeh>`_
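For a taste of what authoring from Python looks like, here is a minimal sketch using the ``torch.library`` APIs, assuming a recent PyTorch; the ``mylib::numpy_sin`` name and the NumPy-backed kernel are invented for illustration, and the tutorial above is the authoritative walkthrough:

.. code-block:: python

    import numpy as np
    import torch

    # Declare the operator's schema under a hypothetical "mylib" namespace.
    torch.library.define("mylib::numpy_sin", "(Tensor x) -> Tensor")

    # Register a CPU implementation that calls into code PyTorch cannot
    # see through (NumPy stands in for a C/C++ extension here).
    @torch.library.impl("mylib::numpy_sin", "cpu")
    def numpy_sin_impl(x):
        return torch.from_numpy(np.sin(x.numpy()))

    x = torch.randn(3)
    print(torch.ops.mylib.numpy_sin(x))  # callable like a built-in operator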
How do I black-box a Python function for use with torch.compile?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
   [comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4

Please see the `Python Custom Operators tutorial <https://colab.research.google.com/drive/1xCh5BNHxGnutqGLMHaHwm47cbDL9CB1g#scrollTo=gg6WorNtKzeh>`_
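As a rough sketch of the idea, under the same invented ``mylib`` namespace as above: wrapping the opaque call in a custom op, plus an abstract/fake implementation so the compiler can infer output metadata, lets Dynamo record one black-box node instead of graph-breaking:

.. code-block:: python

    import numpy as np
    import torch

    torch.library.define("mylib::numpy_mul", "(Tensor x, float k) -> Tensor")

    @torch.library.impl("mylib::numpy_mul", "cpu")
    def numpy_mul_impl(x, k):
        # Opaque to Dynamo: round-trips through NumPy.
        return torch.from_numpy(x.numpy() * k)

    @torch.library.impl_abstract("mylib::numpy_mul")
    def numpy_mul_abstract(x, k):
        # Only describes the output's shape/dtype; torch.compile uses
        # this to trace past the op without running the real kernel.
        return torch.empty_like(x)

    @torch.compile(fullgraph=True)
    def f(x):
        return torch.ops.mylib.numpy_mul(x, 2.0)

    print(f(torch.randn(3)))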
How do I integrate custom C++ and/or CUDA code with PyTorch?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

..
   [comment] TODO(rzou): The following will be a link to a tutorial on the PyTorch tutorials site in 2.4

Please see the `Custom C++ and CUDA Operators tutorial <https://docs.google.com/document/d/1-LdJZBzlxiF0Tm-8NfbyFvRJaofdwRgLcycXGmlIpS0/edit>`_
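On the Python side, an operator registered from C++ with ``TORCH_LIBRARY`` becomes visible under ``torch.ops`` once its shared library is loaded. A hedged sketch; the library path and op name below are placeholders:

.. code-block:: python

    import torch

    # Load a shared library whose C++ side ran something like:
    #   TORCH_LIBRARY(mylib, m) { m.def("myop(Tensor x) -> Tensor"); ... }
    # The path is a placeholder for your actual build output.
    torch.ops.load_library("build/libmylib_ops.so")

    y = torch.ops.mylib.myop(torch.randn(3))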
For more details
^^^^^^^^^^^^^^^^

Please see `The Custom Operators Manual (gdoc) <https://docs.google.com/document/d/1-LdJZBzlxiF0Tm-8NfbyFvRJaofdwRgLcycXGmlIpS0/edit>`_
(we're working on moving the information to our docs site). We recommend that you
first read one of the tutorials above and then use the Custom Operators Manual as a
reference; it is not meant to be read from head to toe.
When should I create a Custom Operator?
---------------------------------------

If your operation is expressible as a composition of built-in PyTorch operators,
then please write it as a Python function and call it instead of creating a
custom operator. Use the operator registration APIs to create a custom op if you
are calling into some library that PyTorch doesn't understand (e.g. custom C/C++ code,
a custom CUDA kernel, or Python bindings to C/C++/CUDA extensions).
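For example (an illustrative sketch), an operation like the following needs no custom op; being a composition of built-ins, it already works with autograd, vmap, and ``torch.compile``:

.. code-block:: python

    import torch

    # No registration needed: every step is a built-in PyTorch operator,
    # so all PyTorch subsystems understand it out of the box.
    def scaled_tanh(x, scale):
        return torch.tanh(x * scale) * scale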
Why should I create a Custom Operator?
--------------------------------------

It is possible to use a C/C++/CUDA kernel by grabbing a Tensor's data pointer
and passing it to a pybind'ed kernel. However, this approach doesn't compose with
PyTorch subsystems like autograd, torch.compile, vmap, and more. In order
for an operation to compose with PyTorch subsystems, it must be registered
via the operator registration APIs.
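A runnable toy of that failure mode, with NumPy standing in for the external kernel (names invented for illustration):

.. code-block:: python

    import numpy as np
    import torch

    def external_sin(x):
        # Bypasses PyTorch entirely: autograd sees a leaf go out and a
        # foreign tensor come back, so the graph is severed here.
        return torch.from_numpy(np.sin(x.detach().numpy()))

    x = torch.randn(3, requires_grad=True)
    y = external_sin(x)
    print(y.grad_fn)  # None: gradients cannot flow through the call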

docs/source/notes/extending.rst

Lines changed: 12 additions & 10 deletions
Original file line numberDiff line numberDiff line change
@@ -4,6 +4,18 @@ Extending PyTorch
 In this note we'll cover ways of extending :mod:`torch.nn`,
 :mod:`torch.autograd`, :mod:`torch`, and writing custom C++ extensions.

+Adding new operators
+--------------------
+
+PyTorch offers a large library of operators that work on Tensors (e.g. :func:`torch.add`,
+:func:`torch.sum`, etc.). However, you may wish to bring a new custom operation to PyTorch
+and have it behave like PyTorch's built-in operators. In order to do so, you must
+register the custom operation with PyTorch via the Python :ref:`torch-library-docs` or C++
+``TORCH_LIBRARY`` APIs.
+
+Please see :ref:`custom-ops-landing-page` for more details.
+
 .. _extending-autograd:

 Extending :mod:`torch.autograd`
@@ -968,13 +980,3 @@ Which prints the following, with extra comments::
     Dispatch Log: aten.mul.Tensor(*(tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]), 2), **{})
     Dispatch Log: aten.detach.default(*(tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.]),), **{})
     Dispatch Log: aten.detach.default(*(tensor([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.]),), **{})
-
-
-Writing custom C++ extensions
------------------------------
-
-See this
-`PyTorch tutorial <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_
-for a detailed explanation and examples.
-
-Documentations are available at :doc:`../cpp_extension`.
