Test failure in tests/test_optimizer.py
#115058
Comments
Exact same failure on Windows, so I'm removing the OS-specific labels.
cc @mdboom
Hello!
I've checked it manually: if I manually increment the counter when it equals 255, it becomes zero.
Ok, now I understand:

```c
#define RARE_EVENT_INTERP_INC(interp, name) \
    do { \
        /* saturating add */ \
        if (interp->rare_events.name < UINT8_MAX) interp->rare_events.name++; \
        RARE_EVENT_STAT_INC(name); \
    } while (0);
```

This macro doesn't allow the counter to increase above UINT8_MAX.

```
./python.exe -m test test_fileinput test_funcattrs test_functools test_generators test_genexps test_getopt test_gettext test_optimizer
Using random seed: 3588920112
0:00:00 load avg: 52.24 Run 8 tests sequentially
0:00:00 load avg: 52.24 [1/8] test_fileinput
0:00:00 load avg: 52.24 [2/8] test_funcattrs
0:00:00 load avg: 52.24 [3/8] test_functools
0:00:00 load avg: 52.24 [4/8] test_generators
0:00:00 load avg: 52.24 [5/8] test_genexps
0:00:00 load avg: 52.24 [6/8] test_getopt
0:00:00 load avg: 52.24 [7/8] test_gettext
0:00:00 load avg: 52.24 [8/8] test_optimizer
== Tests result: SUCCESS ==
All 8 tests OK.
Total duration: 898 ms
Total tests: run=458
Total test files: run=8/8
Result: SUCCESS
```

So, probably it's enough to just change the type of the counter.
I'm bringing @markshannon into the discussion here, because it's his design. I'm not sure we want to just increase the sizes here, certainly not on all of the fields. I think the better solution is probably to "reset" the builtins counter to 0 at some point after most startup has happened. Or maybe modify the test to reset the counter before starting -- I think these high numbers are the result of the test suite being unusual, and don't reflect the rareness of builtin modification one would expect in normal code.
Or change the test from
I worry that could hide a bona fide bug. I think the better solution is to add a function in `_testinternalcapi` to reset the counter and then run the test as-is. @Eclips4: Do you want to modify your existing PR to do that, or should I take it? (I'm happy either way.)
Resetting the counter to zero seems better for testing, but it is possible an optimization would rely on the count being correct.
Hi @markshannon and @mdboom! I apologize if this is a silly question. I was curious about when the counter reset would occur? Would it be after the tests from one test module have finished running, when the counter hits 255, or when another condition is met?
My thought is that there would be a new function added to `_testinternalcapi`.
I agree that this is the better solution. I'm going to update my PR.
I have updated the PR, please check it :) |
Bug report
Bug description:
As of 01dceba, I'm seeing a test failure on macOS and iOS in the tests/test_optimizer.py test case. If you run the test by itself (`python -m test tests/test_optimizer.py`), it passes. However, if you run the entire test suite in default (alphabetical) order, it fails.
I've been able to narrow down a minimal reproduction case if you run the following sequence of tests:
If you remove any one of the tests before test_optimizer, the test passes. I'm observing the failure on both x86_64 (Ventura 13.5.2) and M1 (Ventura 13.6.2) hardware.
CPython versions tested on:
CPython main branch
Operating systems tested on:
macOS, Other
Linked PRs
`reset_rare_event_counters` function in `_testinternalcapi` (#115128)