GH-126491: Lower heap size limit with faster marking by markshannon · Pull Request #127519 · python/cpython · GitHub

GH-126491: Lower heap size limit with faster marking #127519


Merged · 24 commits · Dec 6, 2024

Changes from 1 commit. All 24 commits:
3038a78
Faster marking of reachable objects
markshannon Nov 9, 2024
c024484
Handle more classes in fast marking
markshannon Nov 10, 2024
e8497ae
Add support for async generators on fast path. Simplify counting
markshannon Nov 11, 2024
4c1a6bc
Check stackref before converting to PyObject *
markshannon Nov 11, 2024
6efb4c0
Rename stuff
markshannon Nov 13, 2024
b1c7ab0
Remove expand_region_transitively_reachable and use move_all_transiti…
markshannon Nov 13, 2024
07f228b
Merge branch 'main' into faster-marking
markshannon Dec 2, 2024
51ff78e
Fix compiler warnings and linkage
markshannon Dec 2, 2024
df907b5
Fix another linkage issue
markshannon Dec 2, 2024
9ca64f5
Try 'extern'
markshannon Dec 2, 2024
bda13f4
Go back to PyAPI_FUNC and move functions together
markshannon Dec 2, 2024
d9d63c8
Use _Py_FALLTHROUGH
markshannon Dec 2, 2024
57b8820
Add necessary #ifndef Py_GIL_DISABLED
markshannon Dec 2, 2024
a607059
Go back to using tp_traverse, but make traversal more efficient
markshannon Dec 3, 2024
1545508
Tidy up
markshannon Dec 3, 2024
a1a38c8
A bit more tidying up
markshannon Dec 3, 2024
68fc90b
Move all work to do calculations to one place
markshannon Dec 3, 2024
8893cf5
Assume that increments are 50% garbage for work done calculation
markshannon Dec 3, 2024
ba20c7c
Elaborate comment
markshannon Dec 4, 2024
8262bf0
More tweaking of thresholds
markshannon Dec 4, 2024
3c2116e
Do some algebra
markshannon Dec 4, 2024
72d0284
Revert to 2M+I from 3M+I
markshannon Dec 4, 2024
0f182e2
Address review comments
markshannon Dec 5, 2024
d3c21bb
Address review comments and clarify code a bit
markshannon Dec 5, 2024
Elaborate comment
markshannon committed Dec 4, 2024
commit ba20c7c990ff0dbb6af8ec27b5bb8780015f3edc
13 changes: 8 additions & 5 deletions Python/gc.c
@@ -1550,13 +1550,16 @@ assess_work_to_do(GCState *gcstate)
/* The amount of work we want to do depends on two things.
Review comment (Member): Is it worth linking to the doc from here?

* 1. The number of new objects created
* 2. The heap size (up to twice the number of new objects, to avoid quadratic effects)
* 3. The amount of garbage.
*
* For a large, steady state heap, the amount of work to do is three times the number
* of new objects added to the heap. This ensures that we stay ahead in the
* worst case of all new objects being garbage.
* We cannot know how much of the heap is garbage, but we know that no reachable object
* is garbage. We make a (fairly pessimistic) assumption that half the heap not
* reachable from the roots is garbage, and count collection of increments as only half as
* efficient as processing the heap in the marking phase.
*
* This could be improved by tracking survival rates, but it is still a
* large improvement on the non-marking approach.
* For a large, steady state heap, the amount of work to do is at least three times the
* number of new objects added to the heap. This ensures that we stay ahead in the
* worst case of all new objects being garbage.
*/
intptr_t scale_factor = gcstate->old[0].threshold;
if (scale_factor < 2) {
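To make the heuristic described in the comment above concrete, here is a minimal, self-contained C sketch of a work-assessment function along the lines the comment sets out. This is not CPython's actual assess_work_to_do: the name sketch_assess_work_to_do, its parameters, and the example numbers are illustrative assumptions; only the shape of the calculation (a heap-size contribution capped at twice the number of new objects, so steady-state work is roughly three times the new-object count) follows the comment.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: not CPython's assess_work_to_do(). The parameters stand
 * in for GC state that the real function reads from gcstate. */
static intptr_t
sketch_assess_work_to_do(intptr_t new_objects, intptr_t heap_size,
                         intptr_t scale_factor)
{
    if (scale_factor < 2) {
        scale_factor = 2;   /* mirror the lower bound shown in the diff above */
    }
    /* Contribution from total heap size, scaled down so that a large heap is
     * processed over many increments rather than all at once. */
    intptr_t heap_contribution = heap_size / scale_factor;
    /* Cap the heap contribution at twice the number of new objects, so total
     * work stays linear in allocations and avoids quadratic effects. */
    if (heap_contribution > 2 * new_objects) {
        heap_contribution = 2 * new_objects;
    }
    /* For a large, steady-state heap the cap is hit, giving roughly three
     * times the number of new objects: enough to stay ahead even if every
     * new object turns out to be garbage. */
    return new_objects + heap_contribution;
}

int main(void)
{
    /* Example: 10,000 new objects, a 10,000,000-object heap, threshold 10. */
    long work = (long)sketch_assess_work_to_do(10000, 10000000, 10);
    printf("work units: %ld\n", work);   /* prints 30000 */
    return 0;
}

With the example inputs, the uncapped heap contribution (1,000,000) is clamped to 20,000, so the sketch returns 30,000 units of work for 10,000 new objects, illustrating the "three times the number of new objects" steady-state behaviour the comment describes.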