JIT test suite has dependencies across tests #39463

Open
ezyang opened this issue Jun 3, 2020 · 2 comments
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

ezyang (Contributor) commented Jun 3, 2020

Steps to reproduce:

  1. Induce a bug by adding more type annotations. https://github.com/ezyang/pytorch/tree/poc/jit-bug is what I was working on when this happened
  2. Run these tests:
(/home/ezyang/local/pytorch-tmp-env) [ezyang@devvm066.ash0 ~/local/pytorch-tmp] python test/test_jit.py TestScript.test_circular_dependency
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 0.211s

OK
(/home/ezyang/local/pytorch-tmp-env) [ezyang@devvm066.ash0 ~/local/pytorch-tmp] python test/test_jit.py TestScript.test_attribute_in_init TestScript.test_circular_dependency
Couldn't download test skip set, leaving all tests enabled...
.F
======================================================================
FAIL: test_circular_dependency (__main__.TestScript)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test/test_jit.py", line 5303, in test_circular_dependency
    self.getExportImportCopy(C())
  File "/data/users/ezyang/pytorch-tmp/torch/jit/__init__.py", line 1581, in init_then_script
    original_init(self, *args, **kwargs)
  File "test/test_jit.py", line 5296, in __init__
    self.foo = torch.nn.Sequential(B())
  File "/data/users/ezyang/pytorch-tmp/torch/jit/__init__.py", line 1587, in init_then_script
    self.__dict__["_actual_script_module"] = torch.jit._recursive.create_script_module(self, make_stubs)
  File "/data/users/ezyang/pytorch-tmp/torch/jit/_recursive.py", line 305, in create_script_module
    concrete_type = concrete_type_store.get_or_create_concrete_type(nn_module)
  File "/data/users/ezyang/pytorch-tmp/torch/jit/_recursive.py", line 264, in get_or_create_concrete_type
    concrete_type_builder = infer_concrete_type_builder(nn_module)
  File "/data/users/ezyang/pytorch-tmp/torch/jit/_recursive.py", line 128, in infer_concrete_type_builder
    assert attr_type.is_interface_type()
AssertionError

----------------------------------------------------------------------
Ran 2 tests in 0.152s

FAILED (failures=1)
(/home/ezyang/local/pytorch-tmp-env) [ezyang@devvm066.ash0 ~/local/pytorch-tmp] python test/test_jit.py TestScript.test_attribute_in_init
Couldn't download test skip set, leaving all tests enabled...
.
----------------------------------------------------------------------
Ran 1 test in 0.047s

OK

Individually the tests pass but together they fail.
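
One generic way to smoke out ordering dependencies like this, independent of the PyTorch harness (the `ShuffledLoader` helper below is a hypothetical sketch, not part of the test suite), is to randomize the order in which unittest runs each class's test methods:

```python
import random
import unittest

class ShuffledLoader(unittest.TestLoader):
    """Hypothetical loader: randomizes test-method order so hidden
    inter-test dependencies fail loudly instead of hiding behind the
    default alphabetical ordering."""
    def getTestCaseNames(self, testCaseClass):
        names = list(super().getTestCaseNames(testCaseClass))
        random.shuffle(names)
        return names

if __name__ == "__main__":
    # Assumes test_jit is importable (e.g. run from the pytorch test/ dir).
    unittest.main(module="test_jit", testLoader=ShuffledLoader())
```

Running the suite a few times under a shuffled order turns "passes alphabetically" into a reproducible failure.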

cc @suo

@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Jun 3, 2020
@ezyang ezyang changed the title JIT test suite is nondeterministic JIT test suite has dependencies across tests Jun 3, 2020
ezyang (Contributor, Author) commented Jun 3, 2020
@@ -1625,7 +1638,11 @@ if _enabled:
                 # This ensures that if we use the attr again in `__init__`, it
                 # will look like the actual value, not an instance of Attribute.
                 if isinstance(value, Attribute):
-                    if not hasattr(self, "__annotations__"):
+                    # NB: Ensure that we set __annotations__ on the specific
+                    # class in question, and not on a superclass (which would
+                    # be wrong wrong wrong!).
+                    # See also https://github.com/pytorch/pytorch/issues/39463
+                    if "__annotations__" not in self.__class__.__dict__:
                         self.__annotations__ = {}
                     self.__annotations__[attr] = value.type
                     value = value.value

seems to fix it
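
To see why `hasattr` is the wrong guard here: attribute lookup walks the MRO, so a subclass with no `__annotations__` of its own silently reads, and then mutates, its parent's dict. A minimal, PyTorch-free sketch of the failure mode (using the class-annotation semantics of the Python versions current at the time):

```python
class Base:
    x: int                      # gives Base its own __annotations__ dict

class Child(Base):              # Child declares no annotations of its own
    pass

c = Child()
assert hasattr(c, "__annotations__")      # True: found on Base via the MRO,
                                          # so the old guard skips creating {}
c.__annotations__["leaked"] = str         # ...and this mutates Base's dict!
assert "leaked" in Base.__annotations__   # pollution visible to every other
                                          # user of Base from now on

# The patched guard asks whether the specific class owns the dict:
assert "__annotations__" not in Child.__dict__
```

Because the first test pollutes `Base`, any later test that touches another subclass of `Base` sees annotations it never declared, which is the cross-test dependency reproduced above.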

SplitInfinity commented

I've noticed that this can happen for other reasons too. For example, if you define some TorchScript classes and use them in more than one unit test (i.e., an instance method whose name begins with test_ in a subclass of JitTestCase), the tests pass individually but fail when run together. But perhaps this deserves its own issue.
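
A hypothetical sketch of the pattern being described (all class and test names invented): a TorchScript class defined once at module scope and used from two test methods of a JitTestCase subclass, so the second test runs against compilation state left behind by the first:

```python
import torch
from torch.testing._internal.jit_utils import JitTestCase

@torch.jit.script
class Box(object):  # hypothetical TorchScript class shared by both tests
    def __init__(self, v: int):
        self.v = v

class TestSharedClass(JitTestCase):
    def test_box_read(self):
        @torch.jit.script
        def f(x: int) -> int:
            return Box(x).v
        self.assertEqual(f(3), 3)

    def test_box_read_again(self):
        @torch.jit.script
        def g(x: int) -> int:
            return Box(x).v + 1
        self.assertEqual(g(3), 4)
```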

ezyang added a commit that referenced this issue Jun 8, 2020
… inline"


Just because the annotations are inline doesn't mean the files type
check; most of the newly annotated files have type errors, and I
added exclusions for them in mypy.ini.  The payoff of moving all of
these modules inline is that I can delete the relevant code
generation logic for the pyi files (which was adding ignore
annotations that weren't actually relevant anymore).  Because we
aren't actually typechecking these modules in most cases, it is
inevitable that some of these type annotations are wrong.  I
slavishly copied the old annotations from the pyi files unless there
was an obvious correction I could make.  These annotations will
probably need fixing up later.

Moving these annotations inline was really hairy because of interactions
with JIT, and also the fact that Python type erasure is a lie (inheriting
from Generic *does* change the behavior of your object). Here is
the list of things I had to fix and/or work around:

- The quantization translation passes previously barfed if the weight/bias arguments were inferred to be Optional. Previously, TorchScript type inference would have inferred that these arguments were non-Optional (because type inference happens after module construction), but accurate type annotations on these parameters override this inference process, causing the arguments to be Optional. I fixed this by making the quantized operator signatures line up exactly with the non-quantized signatures, so we never change the types of the arguments. This change mostly involved making a bunch of quantized kernels take optional arguments, and then error if they were passed nullopt. (You can have any color you like, as long as it's non-null.)
- I removed Generic support for Module and ModuleList. The intentions behind this were admirable, but making Module inherit from Generic ended up being more headache than it was worth. First, in Python 3.6 and earlier, Generic has a nontrivial metaclass, which means all subsequent metaclass shenanigans (e.g., ScriptModule) need to line up the right metaclass. Second, Generic defines `__new__` specially, which means that `inspect.signature` doesn't work (see https://bugs.python.org/issue40897, and the standalone sketch after this commit message), and I found a case of people using precisely this in the wild. Between these two problems, and also the general problem that the parametrization here is an incomplete fix (parametrization helps with output typing, but it doesn't solve problems with input typing, and with mypy as it stands this is unfixable, see python/mypy#3028), I decided to just eliminate Module generics entirely. We still apply the Callable trick so that subclasses of Module don't cause mypy to complain, but otherwise you are on your own for getting accurate type information out of Modules.
- The `Callable` trick on `forward` caused TorchScript to stop performing inference on the forward body, which is bad because in general we can only figure out the most accurate type by doing TorchScript inference. I added a special case to `infer_type` to ensure we always do inference for `Module.forward`, even if it is annotated (which it is), and another special case to make sure we recursively ignore references to Callable (which we shouldn't process).
- When `__annotations__` is set on a class (as is the case when you add type annotations), JIT will incorrectly add further annotations to the parent class. This PR fixes #39463 by testing if `__annotations__` is defined on the specific class, excluding parent classes from the test.
- Added a missing fake source range to the invocation of `get_signature`
- In some cases, we cannot provide accurate typing for parameters on modules. This usually occurs when you have an `Optional[Tensor]` parameter whose optional-ness is determined at `__init__` time. Without the annotation, TorchScript will infer the correct refined type depending on arguments to the constructor, but with the annotation, it will never do a refinement at `__init__` time, and you'll end up with the wrong type. I ended up just deleting the type annotations in all of these cases. A more robust fix might be to provide some way to force TorchScript to do inference even when there is an explicit annotation, in case refinement is needed.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: [D21497397](https://our.internmc.facebook.com/intern/diff/D21497397)

[ghstack-poisoned]
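
The `inspect.signature` breakage cited in the Generic bullet above (bpo-40897) can be reproduced in isolation; a minimal sketch, independent of PyTorch, with the output seen on the Python versions affected at the time:

```python
import inspect
from typing import Generic, TypeVar

T = TypeVar("T")

class Plain:
    def __init__(self, x: int) -> None:
        self.x = x

class Param(Generic[T]):
    def __init__(self, x: int) -> None:
        self.x = x

print(inspect.signature(Plain))  # (x: int) -> None
print(inspect.signature(Param))  # (*args, **kwds) on affected versions:
                                 # Generic.__new__ shadows __init__ here,
                                 # so anything relying on signature() breaks
```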
ezyang added a commit that referenced this issue Jun 8, 2020
ezyang added a commit that referenced this issue Jun 9, 2020
ezyang added a commit that referenced this issue Jun 9, 2020
@suo suo added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Jun 9, 2020
ezyang added a commit that referenced this issue Jun 9, 2020
ezyang added a commit that referenced this issue Jun 9, 2020