rp2: Crash with hard pin IRQ #6957
Having read library/machine.Pin.html …
This bug has not yet been fixed in the latest firmware. I guess it's still early days for …
@robert-hh nice image.
Playing around a bit with the code, I got the hard IRQ running. I removed the calls to gc.lock() and gc.unlock() in mpirq.c. Not safe, but for the given code example it works. So I could get some latency timings, which are not a real improvement over soft IRQ: the latency is 21 to 49 µs with an average of about 28 µs, just 7 µs less than the soft IRQ. With those numbers, hard IRQ is hardly worth the restrictions that come along with it. Scope screenshot below.
The problem with soft IRQs arises when the interrupt happens while a GC pass is in progress. Latency is then much greater than for a hard IRQ.
Looking into another problem, I came across this "lock on hard IRQ" topic. gc_lock() calls GC_ENTER(), which in turn calls mutex_enter_blocking(). According to the SDK specs, that function must not be called from an IRQ. Replacing the mutex calls with critsec calls changes the behavior, but I do not know if that is the proper cure. I know too little about the GC.
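The hazard described above can be illustrated in plain Python, with a non-reentrant threading.Lock standing in for the SDK mutex (illustrative only, not MicroPython internals): if the main code already holds the lock when an interrupt path on the same core attempts a blocking acquire, the acquire can never succeed.

```python
import threading

# Stands in for the pico-sdk mutex guarding the GC (illustrative name).
gc_mutex = threading.Lock()

# Main code takes the lock, e.g. a GC pass is in progress.
gc_mutex.acquire()

# A hard IRQ on the same core now tries to enter the GC lock.
# A blocking acquire would deadlock forever; a bounded attempt
# shows the acquisition cannot succeed while the lock is held.
acquired = gc_mutex.acquire(timeout=0.05)
print("IRQ path acquired lock:", acquired)  # prints: IRQ path acquired lock: False

gc_mutex.release()
```

This is why the pico-sdk forbids mutex_enter_blocking() in IRQ context: the interrupted code can never run again to release the mutex while the IRQ is spinning on it.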
Thanks @peterhinch for the report. I can reproduce the issue using the code above. The problem is that it's tricky to coordinate handling IRQs with multithreading (which CPU handles the IRQ, and what can each CPU do when one is handling an IRQ?). For now please just use soft IRQs. |
@dpgeorge You have probably seen my test above, replacing the mutex calls in mpthreadport.c with critsec calls. With that change, Peter's test code works. According to the SDK documentation, the mutex calls must not be made in an IRQ context. But memory management with dual threads and IRQs is still an open topic. Code used for the test:
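The attached snippet itself did not survive extraction from the page. A sketch of the kind of substitution described, assuming pico-sdk's critical_section API (the function and variable names here are illustrative, not necessarily the identifiers used in mpthreadport.c):

```c
// Sketch only: replace a pico-sdk mutex with a critical section.
// A critical section disables interrupts and takes a spin lock,
// so it is usable from IRQ context, unlike mutex_enter_blocking().
#include "pico/critical_section.h"

static critical_section_t atomic_critsec;  // was: static mutex_t atomic_mutex;

void port_thread_init(void) {
    critical_section_init(&atomic_critsec);  // was: mutex_init(&atomic_mutex);
}

uint32_t begin_atomic_section(void) {
    // was: mutex_enter_blocking(&atomic_mutex);
    critical_section_enter_blocking(&atomic_critsec);
    return 0;
}

void end_atomic_section(uint32_t state) {
    (void)state;
    // was: mutex_exit(&atomic_mutex);
    critical_section_exit(&atomic_critsec);
}
```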
I posted #7217 to try and fix this issue. It also aims to give hard IRQ less jitter (at the expense of RAM usage, putting more code into RAM so that external flash access doesn't slow things down). I also noticed that the hard IRQ jitter improves (is smaller) if the main code just executes sleep_ms(10). If you could test it that would be great!
Using the previous test script, it fixes this issue. Without any other load than in this test script, the jitter is the same (stddev 3.5 µs, mean ~29 µs, min 16 µs, max 40 µs). Indeed, the jitter is smaller if the main code just executes sleep_ms(10).
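Summary statistics like those quoted above can be computed from raw latency samples in post-processing; a minimal sketch with made-up data (the sample values are illustrative, not the actual measurements from this thread):

```python
import statistics

# Hypothetical raw latency samples in microseconds (illustrative only).
samples_us = [16, 25, 28, 29, 30, 31, 33, 40]

mean = statistics.mean(samples_us)     # 29.0
stdev = statistics.stdev(samples_us)   # sample standard deviation
print(f"min={min(samples_us)} max={max(samples_us)} "
      f"mean={mean:.1f} stddev={stdev:.1f}")
```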
Great, thanks for testing it.
That's about what I see (although my equipment is not as fancy as yours!). I'm not sure of the exact architecture of the RP2040, but it may be that IRQs are stalled until any outstanding external-SPI-flash requests are complete. So if the code is executing something from flash and the flash must load the machine code, then IRQs must wait until that is complete (even if the IRQ handler runs from RAM). That's my guess as to why the jitter is there with the original script at the top of this issue, but not with sleep_ms(10) alone.
So that hard IRQs can run correctly. Fixes issue micropython#6957. Signed-off-by: Damien George <damien@micropython.org>
Measuring hard IRQ latency with this script:
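The script embed did not survive extraction from the page. A sketch of the kind of test the thread describes — an output pin wired back to an input pin, with a hard IRQ handler dropping the output so a scope can read the pulse width as latency (pin numbers and the exact busywork are assumptions; requires a Pico):

```python
# MicroPython, rp2 port -- wire GPIO 16 to GPIO 17, scope on GPIO 16.
from machine import Pin
import time

out_pin = Pin(16, Pin.OUT, value=0)
in_pin = Pin(17, Pin.IN)

def handler(pin):
    out_pin(0)  # high time on the scope = IRQ latency

in_pin.irq(handler, Pin.IRQ_RISING, hard=True)

while True:
    a = [0] * 1000   # busywork: heap churn that provokes GC activity
    out_pin(1)       # rising edge triggers the hard IRQ
    time.sleep_ms(10)
```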
The usual latency was a creditable 25 μs, but I have measured 100 μs. It was not practicable to measure a worst case because the machine usually suffers a hard crash almost immediately. If I comment out the busywork, leaving just the sleep_ms(10), it runs indefinitely with about 20 μs latency.