RP2/machine_uart.c: Fix poll ioctl. by iabdalkader · Pull Request #7552 · micropython/micropython

Closed
iabdalkader wants to merge 1 commit

Conversation

iabdalkader
Contributor
  • The RX IRQ does not trigger if the FIFO is less than the trigger level.
  • Use machine_uart_any to check for readability to drain the RX FIFO.

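Below is a hedged sketch of the kind of poll check those two points describe, not necessarily the exact merged code: the ioctl signature and MP_STREAM_POLL constants are the usual MicroPython stream ones, read_buffer comes from the code quoted later in this thread, and the self->uart field name is an assumption.

STATIC mp_uint_t machine_uart_ioctl(mp_obj_t self_in, mp_uint_t request, uintptr_t arg, int *errcode) {
    machine_uart_obj_t *self = MP_OBJ_TO_PTR(self_in);
    mp_uint_t ret = 0;
    if (request == MP_STREAM_POLL) {
        uintptr_t flags = arg;
        // Readable if data is already in the ring buffer, or still sitting in
        // the hardware RX FIFO below the IRQ trigger level (self->uart is an
        // assumed field name for the Pico SDK uart instance).
        if ((flags & MP_STREAM_POLL_RD)
            && (ringbuf_avail(&self->read_buffer) > 0 || uart_is_readable(self->uart))) {
            ret |= MP_STREAM_POLL_RD;
        }
        if ((flags & MP_STREAM_POLL_WR) && uart_is_writable(self->uart)) {
            ret |= MP_STREAM_POLL_WR;
        }
    } else {
        *errcode = MP_EINVAL;
        ret = MP_STREAM_ERROR;
    }
    return ret;
}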
@dpgeorge
Member

The RX IRQ does not trigger if the FIFO is less than the trigger level

OK, so now I understand what is going on. Maybe the solution is then to change the FIFO trigger level to 1 byte (so 1 byte in the FIFO triggers the RX IRQ)?

@robert-hh what do you suggest here? At the moment it seems that machine_uart_read() will return EAGAIN even if there are bytes available, if those bytes are just in the hardware UART FIFO and have not reached the level to trigger the RX IRQ.

(Is it true that the RX FIFO is enabled and the trigger level is above 1?)

@robert-hh
Contributor
robert-hh commented Jul 23, 2021

machine_uart_read() drains the FIFO if no characters are in the ring buffer, until either the timeout expires or the requested number of characters has been read. I've tested that by sending single bytes. The suggested change is for uart_ioctl, where indeed the initial implementation would not detect a few characters sitting in the FIFO below the trigger level. So the suggested change is fine. The smallest FIFO level value for the IRQ trigger is 4; it cannot be set to 1.

Edit: Just checking for readable in machine_uart_ioctl() is indeed better than calling uart_drain_rx_fifo(), since the latter discards data when the ring buffer is full.
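For context, the 4-byte minimum comes from the RP2040's PL011 UART: its UARTIFLS register selects the RX interrupt fill level as a fraction of the 32-byte FIFO, and the smallest selectable fraction is 1/8, i.e. 4 bytes. A hedged sketch of selecting that minimum level with Pico SDK register accessors (illustration only, not code from this PR; the helper name is hypothetical):

#include "hardware/uart.h"
#include "hardware/structs/uart.h"

// Hypothetical helper: set the RX IRQ trigger to the lowest level the PL011
// supports, RXIFLSEL = 0b000, i.e. >= 1/8 of the 32-byte FIFO (4 bytes).
// Levels below 4 bytes are not selectable.
static void uart_set_min_rx_trigger(uart_inst_t *uart) {
    hw_write_masked(&uart_get_hw(uart)->ifls,
        0 << UART_UARTIFLS_RXIFLSEL_LSB,
        UART_UARTIFLS_RXIFLSEL_BITS);
}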

@dpgeorge
Member

machine_uart_read() drains the FIFO if no characters are in the ring buffer until either the timeout expires or the requested number of characters are read.

Where does it drain the FIFO if no chars are in the buffer? Inspecting the code, I see:

STATIC mp_uint_t machine_uart_read(mp_obj_t self_in, void *buf_in, mp_uint_t size, int *errcode) {
    machine_uart_obj_t *self = MP_OBJ_TO_PTR(self_in);
    uint64_t t = time_us_64() + (uint64_t)self->timeout * 1000;     <-- t=time_us_64() if timeout==0
    uint64_t timeout_char_us = (uint64_t)self->timeout_char * 1000;
    uint8_t *dest = buf_in;

    for (size_t i = 0; i < size; i++) {
        // Wait for the first/next character
        while (ringbuf_avail(&self->read_buffer) == 0) {   <-- assume nothing on ringbuf
            if (time_us_64() > t) {  // timed out          <-- condition will pass if timeout==0
                if (i <= 0) {                              <-- no characters read, condition passes
                    *errcode = MP_EAGAIN;                  <-- EAGAIN but possible bytes in FIFO
                    return MP_STREAM_ERROR;
                } else {
                    return i;
                }
            }
    ...

@robert-hh
Contributor

A few lines further down, within the while loop, if the timeout has not expired:

STATIC mp_uint_t machine_uart_read(mp_obj_t self_in, void *buf_in, mp_uint_t size, int *errcode) {
    machine_uart_obj_t *self = MP_OBJ_TO_PTR(self_in);
    uint64_t t = time_us_64() + (uint64_t)self->timeout * 1000;
    uint64_t timeout_char_us = (uint64_t)self->timeout_char * 1000;
    uint8_t *dest = buf_in;

    for (size_t i = 0; i < size; i++) {
        // Wait for the first/next character
        while (ringbuf_avail(&self->read_buffer) == 0) {
            if (time_us_64() > t) {  // timed out
                if (i <= 0) {
                    *errcode = MP_EAGAIN;
                    return MP_STREAM_ERROR;
                } else {
                    return i;
                }
            }
            MICROPY_EVENT_POLL_HOOK
            // Force a few incoming bytes to the buffer
            self->read_lock = true;
            uart_drain_rx_fifo(self);
            self->read_lock = false;
        }
        *dest++ = ringbuf_get(&(self->read_buffer));
        t = time_us_64() + timeout_char_us;
    }
    return size;
}

@dpgeorge
Member

But if timeout==0 it can return EAGAIN while there are bytes waiting in the hardware FIFO.

@robert-hh
Contributor

Indeed. So I could add a single drain ahead of the reading loop, or check that the timeout is at least one character time, like timeout_char.

@dpgeorge
Member

Or change the while (ringbuf_avail(&self->read_buffer) == 0) { to use the same logic as in this PR (i.e. also check uart_is_readable).

@robert-hh
Contributor

Just adding that as a second condition to the while statement would not work, because the data would never get into the ring buffer. Adding a drain ahead of the loop is more robust.

@dpgeorge
Member

a second condition into the while statement would not work, because the data would not get to the ring buffer

True.

Adding drain ahead is more robust.

But couldn't that possibly lead to data loss, if the ringbuf overruns at that point? So maybe only drain at the start if the ringbuf is empty?

@robert-hh
Contributor

It could. But in a situation where data is sent faster than it is consumed, there will most likely be data loss anyway. But I can change that.
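A hedged sketch of that adjustment, reusing the names from the machine_uart_read() code quoted above (the actual follow-up change may differ): drain the hardware FIFO into the ring buffer once before entering the read loop, but only when the ring buffer is empty, so a full buffer is never overrun by the extra drain.

    // Sketch only: one-shot drain ahead of the read loop in machine_uart_read(),
    // guarded so it runs only when the ring buffer is empty and cannot overrun it.
    if (ringbuf_avail(&self->read_buffer) == 0) {
        self->read_lock = true;
        uart_drain_rx_fifo(self);
        self->read_lock = false;
    }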

@dpgeorge
Member

Merged in 2e62e13

@dpgeorge closed this Jul 25, 2021
@iabdalkader deleted the rp2_uart_ioctl branch July 25, 2021 11:52