Description
These are some strawman thoughts about how to provide handling of asynchronous events in a simple way in CircuitPython. This was also discussed at some length in our weekly audio chat on Nov 12, 2018, starting at 1:05:36: https://youtu.be/FPqeLzMAFvA?t=3936.
Every time I look at the existing solutions I despair:
- asyncio: it's complicated, has confusing syntax, and is pretty low-level. Event loops are not inherent in the syntax, but are part of the API.
- interrupt handlers: MicroPython has them, but with severe restrictions: handlers must run quickly and can't allocate objects on the heap.
- callbacks: A generalization of interrupt handlers, with similar restrictions.
- threads: Really hard to reason about.
I don't think any of these are simple enough to expose to our target customers.
But I think there's a higher-level mechanism that would suit our needs and could be easily comprehensible to most users, and that's
Message Queues
A message queue is just a sequence of objects, usually first-in-first-out. (There could be fancier variations, like priority queues.)
When an asynchronous event happens, the event handler (written in C) adds a message to a message queue. The Python main program, which could be an event loop, processes these messages as it has time. It can check one or more queues for new messages, and pop messages off to process them. NO Python code ever runs asynchronously.
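To make the model concrete, here's a minimal pure-Python sketch of that producer/consumer relationship. The names are hypothetical; `handler()` stands in for the C-level event handler, and `collections.deque` with `maxlen` stands in for a bounded queue that drops the oldest entry when full:

```python
from collections import deque

# Bounded queue: with maxlen set, deque silently drops the OLDEST
# entry when a new one is appended to a full queue.
timestamp_queue = deque(maxlen=4)

def handler(timestamp):
    # Stands in for the C-level interrupt handler: it only appends
    # a message; no Python code runs asynchronously.
    timestamp_queue.append(timestamp)

# Simulate five interrupts arriving against a queue of capacity four.
for t in [100, 250, 300, 450, 500]:
    handler(t)

# The main program drains the queue synchronously, at its leisure.
processed = []
while timestamp_queue:
    processed.append(timestamp_queue.popleft())

print(processed)  # the oldest event (100) was dropped
```

This is only a model of the semantics; a real implementation would have the handler run in C, for the allocation reasons discussed below.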
Examples:
- Pin interrupt handler: Add a timestamp to a queue of timestamps, recording when the interrupt happened.
- Button presses: Add a bitmask of currently pressed buttons to the queue.
- UART input: Add a byte to the queue.
- I2CSlave: Post an I2C message to a queue of messages.
- Ethernet interface: Add a received packet to a queue of packets.
When you want to process asynchronous events from some builtin object, you attach it to a message queue. That's all you have to do.
There are already some Queue classes in regular Python that could serve as models: https://docs.python.org/3/library/queue.html
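For reference, here is how CPython's stdlib `queue` module handles the empty case: a non-blocking get raises `queue.Empty` rather than returning a sentinel, which is the behavior the strawman API below deliberately deviates from:

```python
import queue

q = queue.Queue(maxsize=4)  # bounded, like the proposed MessageQueue
q.put(123)

print(q.get_nowait())  # returns the queued message: 123

try:
    q.get_nowait()     # queue is now empty
except queue.Empty:
    print("empty")     # CPython signals "no message" with an exception
```

Whether a CircuitPython queue should raise or return `None` on empty is exactly the kind of API question flagged above.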
Some example strawman code is below. The method names are descriptive -- we'd have to do more thinking about the API and its names.
timestamp_queue = MessageQueue() # This is actually too simple: see below.
d_in = digitalio.DigitalInOut(board.D0)
d_in.send_interrupts_to_queue(timestamp_queue, trigger=RISE)
while True:
    timestamp = timestamp_queue.get(block=False) # Or could check for empty (see UART below)
    if timestamp: # Strawman API: regular Python Queues actually throw an exception if nothing is read.
        # Got an interrupt, do something.
        continue
    # Do something else.
Or, for network packets:
packet_queue = MessageQueue()
eth = network.Ethernet()
eth.send_packets_to_queue(packet_queue)
...
For UART input:
uart_queue = MessageQueue()
uart = busio.UART(...)
uart.send_bytes_to_queue(uart_queue)
while True:
    if not uart_queue.is_empty:
        char = uart_queue.pop()
Unpleasant details about queues and storage allocation:
It would be great if queues could just be potentially unbounded queues of arbitrary objects. But right now the MicroPython heap allocator is not re-entrant, so an interrupt handler, packet receiver, or other async producer can't allocate the object it wants to push onto the queue. (That's why MicroPython has those restrictions on interrupt handlers.) The way around that is to pre-allocate the queue storage, which also makes the queue bounded. Being bounded also handles queue overflow: if too many events happen before they're processed, events just get dropped (either the oldest or the newest, say). So queue creation would really be something like:
# Use a list as a queue (or an array.array?)
timestamp_queue = MessageQueue([0, 0, 0, 0])
# Use a bytearray as a queue
uart_queue = MessageQueue(bytearray(64))
# Queue up to three network packets.
packet_queue = MessageQueue([bytearray(1500) for _ in range(3)], discard_policy=DISCARD_NEWEST)
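As a sketch of how such a pre-allocated, bounded queue could behave, here is a pure-Python ring buffer. All the names, including the `DISCARD_*` constants, are hypothetical strawman API, and `put()` never allocates new storage, only writes into pre-allocated slots, which is the property a C-level producer would need:

```python
DISCARD_OLDEST = 0  # when full, overwrite the oldest message
DISCARD_NEWEST = 1  # when full, drop the incoming message

class MessageQueue:
    """Fixed-capacity ring buffer over pre-allocated storage.

    Hypothetical sketch: put() only writes into existing slots,
    so it would be safe to call while the heap is unavailable.
    """
    def __init__(self, storage, discard_policy=DISCARD_OLDEST):
        self._buf = storage            # pre-allocated slots (list, bytearray, ...)
        self._head = 0                 # index of the oldest message
        self._count = 0
        self._discard_policy = discard_policy

    def is_empty(self):
        return self._count == 0

    def put(self, value):
        if self._count == len(self._buf):
            if self._discard_policy == DISCARD_NEWEST:
                return False           # full: drop the incoming message
            # DISCARD_OLDEST: advance head, overwriting the oldest slot
            self._head = (self._head + 1) % len(self._buf)
            self._count -= 1
        tail = (self._head + self._count) % len(self._buf)
        self._buf[tail] = value
        self._count += 1
        return True

    def get(self):
        if self._count == 0:
            return None                # strawman: None rather than an exception
        value = self._buf[self._head]
        self._head = (self._head + 1) % len(self._buf)
        self._count -= 1
        return value
```

For example, pushing four timestamps into a three-slot queue with the default `DISCARD_OLDEST` policy drops the first one, and subsequent `get()` calls return the surviving three in order.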
The whole idea here is that event processing takes place synchronously, in regular Python code, probably in some kind of event loop. But the queues take care of a lot of the event-loop bookkeeping.
If and when we have some kind of multiprocessing (threads or whatever), then we can have multiple event loops.