py/gc: When heap autosplit is enabled, limit new heap size. #13035
Conversation
@tannewt CircuitPython may find this interesting.
Code size report:
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##           master   #13035   +/-   ##
=======================================
  Coverage   98.36%   98.36%
=======================================
  Files         159      159
  Lines       20978    20978
=======================================
  Hits        20636    20636
  Misses        342      342

View full report in Codecov by Sentry.
py/gc.c
Outdated

// into allocation failures elsewhere in the system.
size_t total_free = gc_get_total_free();
size_t grow_limit = total_free * 75 / 100;
Could this grow limit be made configurable? That way people who want to use all the memory possible aren't limited, and advanced users can set whatever value they like.
Thanks @bobveringa, it's an idea worth considering for sure.
To be clear on a small difference, this isn't a hard limit - the Python heap will still grow to use all of the available RAM if the alternative is a MemoryError.
This limit only prevents "greedily" adding more memory than is needed to the Python heap; it eases back to avoid prematurely taking all of the system heap for Python.
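To illustrate the intent, here is rough Python pseudocode of the behaviour described above. It is not the actual C code in py/gc.c; the function name, parameters and structure are made up for illustration.

```python
def pick_heap_growth(bytes_needed, current_heap_size, free_system_heap, reserve):
    """Decide how much extra heap to add when the Python heap must grow."""
    # Greedy growth: prefer doubling the Python heap, but only while doing so
    # would still leave roughly `reserve` bytes of system heap untouched.
    greedy = current_heap_size  # i.e. double the current total
    if free_system_heap - greedy >= reserve:
        return greedy
    # Ease back: the reserve is a soft limit, so still grow by the smaller
    # amount that is actually needed rather than failing outright.
    if free_system_heap >= bytes_needed:
        return bytes_needed
    # Nothing can be added; the caller raises MemoryError, same as before.
    return 0
```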
I just tried out these changes. The default setting doesn't really seem to have an effect: the total still jumps from 64k to 128k. So, just to be sure, I updated the definition on my (custom) board to a different value. I don't have the time to figure out the precise workings of this system yet, but if you have things you would like me to test, then I'm happy to help.
Thanks @bobveringa. That experience suggests to me that either this will need to be runtime tunable, or it's the wrong approach...
For me, after adjusting the setting to try and preserve some more memory, it worked well. My use case is a bit different from most, I suspect, as I also have a bunch of user C modules that need large blocks of contiguous memory (3x 5000 bytes). It is working well enough that if this (or another solution) isn't in the next release, I will probably apply it as a patch anyway, just so I can upgrade my MicroPython version.
Previously the "auto split" function would try to "greedily" double the heap size each time, and only tries to grow by a smaller increment if it would be not possible to allocate the memory. Now it will stop adding large chunks of heap once the free system heap is lower than a limit. The limit is exposed as micropython.heap_sys_reserve() so it can be tweaked for particular firmware use cases. With this change and no tuning of the limit, the Python heap growth on original ESP32 is closer to the v1.20 Python heap size after the first time it expands. Signed-off-by: Angus Gratton <angus@redyak.com.au>
(force-pushed from 560734e to de5fae0)
Thanks @bobveringa for the update. I've made some changes; this now has a runtime tuneable value, micropython.heap_sys_reserve(). See the main PR text for some details.
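For anyone following along, usage is a single call at startup. A minimal sketch (the 100000 figure is the one used in the experiments described in the PR text; pick whatever suits your board):

```python
import micropython

# Ask the allocator to try to keep roughly 100 kB of system heap free
# whenever the Python heap is grown.
micropython.heap_sys_reserve(100000)

# Optional: inspect the heap afterwards to see how it grows.
micropython.mem_info()
```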
After discussing with @jimmo, we think this change is too complex/fine-tuned, and the same goes for the semantics of the new function. As an alternative I've prepared #13219. That's a simple change, and users compiling their own board can change the new value there to suit their use case.
Closing in favour of #13219.
@bobveringa if it's still not working for you, you can tune the new value from #13219.
Previously the "auto split" function (added in #12141) would try to "greedily" double the heap size each time, and only tries to grow by a smaller increment if it would be not possible to allocate the memory.
Greedily growing the heap like this reduces the chance of Python fragmentation, but increases the chance of prematurely running out of system heap (even when there's quite a lot of Python heap free.)
This adds a new function micropython.heap_sys_reserve() and a port-specific macro, MP_PLAT_DEFAULT_HEAP_SYS_RESERVE, which set the amount of total system heap to try and keep free, plus a function gc_get_total_free() to get the total free system heap. These are added on esp32, as this is the only port that enables "auto split" by default.

On original ESP32 without PSRAM, I've been running this test program that first allocates around 60,000 bytes of Python data (in small chunks), then initializes ESP32 Wi-Fi STA mode, and then creates as many concurrent TLS connections as it can.
With the default heap_sys_reserve for the system heap, the Python total heap size grows from 64000 to 111168 bytes. One TLS socket can be created.

Calling micropython.heap_sys_reserve(100000) at the beginning allows two TLS sockets to be created. The Python total heap size grows from 64000 to 75712 bytes. Setting the value higher doesn't improve this further (limited by actual RAM requirements).

So this seems promising, but the side effects may be a little unpredictable still:
I've experimented with this version a little bit and it seems to perform well, but I'm not 100% sure. Anyone who is experiencing issues running out of system heap and wants to test it, please report back if it helps you.
This work was funded through GitHub Sponsors.
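For reference, a rough sketch of the kind of test program described above. The Wi-Fi credentials, target host and chunk size are placeholders, and ssl.wrap_socket is assumed for the TLS step; the actual script isn't included in the PR.

```python
import network
import socket
import ssl

# 1. Allocate around 60,000 bytes of Python data in small chunks.
chunks = [bytearray(100) for _ in range(600)]

# 2. Initialize ESP32 Wi-Fi STA mode.
wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect("your-ssid", "your-password")
while not wlan.isconnected():
    pass

# 3. Create as many concurrent TLS connections as possible.
conns = []
try:
    while True:
        addr = socket.getaddrinfo("example.com", 443)[0][-1]
        s = socket.socket()
        s.connect(addr)
        conns.append(ssl.wrap_socket(s))
        print("TLS sockets open:", len(conns))
except (OSError, MemoryError) as exc:
    print("Stopped after", len(conns), "TLS sockets:", exc)
```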