Speed up caching of subtype checks #12539
Merged
This is very performance critical. Implement a few micro-optimizations to speed up caching a bit. In particular, we use dict.get to reduce the number of dict lookups required, and avoid tuple concatenation, which tends to be a bit slow since it has to construct temporary objects.
It would probably be even better to avoid using tuples as keys
altogether. This could be a reasonable follow-up improvement.
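A minimal sketch of the two lookup-path micro-optimizations described above. The class and attribute names here are illustrative, not mypy's actual cache code:

```python
# Hypothetical cache structure: kind -> set of (left, right) pairs
# known to be in a subtype relationship. Names are illustrative only.
class SubtypeCache:
    def __init__(self) -> None:
        self._cache: dict[int, set[tuple[str, str]]] = {}

    def is_cached_slow(self, kind: int, left: str, right: str) -> bool:
        # Two dict lookups on a hit: one for the membership test,
        # a second one for the indexing operation.
        if kind in self._cache:
            return (left, right) in self._cache[kind]
        return False

    def is_cached_fast(self, kind: int, left: str, right: str) -> bool:
        # dict.get performs a single lookup. The key is also a plain
        # 2-tuple literal rather than one built via tuple concatenation,
        # which would allocate a temporary object on every call.
        entries = self._cache.get(kind)
        if entries is not None:
            return (left, right) in entries
        return False
```

Both methods are semantically equivalent; the second simply does less dictionary work per call, which matters on a hot path.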
Avoid caching if a last known value is set, since such types are unlikely to produce cache hits: the space of literal values is big (essentially infinite).
Also make the global strict_optional attribute an instance-level attribute for faster access, as we might now use it more frequently.
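The attribute change can be sketched roughly as follows. This is a hypothetical illustration of the pattern, not mypy's actual code, and the performance difference is CPython-version dependent:

```python
# Hypothetical module-level flag, standing in for the former global.
STRICT_OPTIONAL = True

class CheckerGlobal:
    def check(self) -> bool:
        # Every access goes through the module's global namespace.
        return STRICT_OPTIONAL

class CheckerInstance:
    def __init__(self, strict_optional: bool) -> None:
        # Copy the flag onto the instance once at construction time;
        # subsequent reads are a local load plus an attribute lookup
        # on self, avoiding the global namespace entirely.
        self.strict_optional = strict_optional

    def check(self) -> bool:
        return self.strict_optional
```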
I extracted the cached subtype check code into a microbenchmark
and the new implementation seems about twice as fast (in an
artificial setting, though).
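A microbenchmark along these lines (a hedged sketch, not the one actually used for this PR; absolute numbers depend on machine and Python version) can be put together with the standard timeit module:

```python
import timeit

# Illustrative cache: kind -> set of (left, right) pairs.
cache = {1: {("int", "object")}}

def lookup_slow() -> bool:
    # Membership test followed by indexing: two dict lookups on a hit.
    if 1 in cache:
        return ("int", "object") in cache[1]
    return False

def lookup_fast() -> bool:
    # Single dict.get lookup.
    entries = cache.get(1)
    return entries is not None and ("int", "object") in entries

t_slow = timeit.timeit(lookup_slow, number=100_000)
t_fast = timeit.timeit(lookup_fast, number=100_000)
print(f"two lookups: {t_slow:.4f}s, dict.get: {t_fast:.4f}s")
```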
Work on #12526 (but should generally make things a little better).