Raw stream by jeff-shulkin · Pull Request #7 · openmv/openmv-projects · GitHub

Raw stream #7

Open
jeff-shulkin wants to merge 2 commits into openmv:master from jeff-shulkin:raw_stream

Conversation

@jeff-shulkin

Adds MicroPython scripts for streaming raw 32-bit words from the GENX320 to the user's PC.
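
For context, a minimal host-side sketch of what "raw 32-bit words" means once the bytes reach the PC. Assumptions: the buffer has already been received over the streaming channel, each event is one little-endian 32-bit word, and the GENX320 event fields are deliberately not decoded; the helper name is made up for illustration.

import struct

def iter_raw_events(buf: bytes):
    # Yield each raw 32-bit event word from a received byte buffer.
    # Assumes little-endian packing; decoding the event fields
    # (timestamp/x/y/polarity) depends on the GENX320 format and is left out.
    for (word,) in struct.iter_unpack("<I", buf):
        yield word

# Usage with a dummy buffer standing in for streamed data.
dummy = struct.pack("<4I", 0x11111111, 0x22222222, 0x33333333, 0x44444444)
for w in iter_raw_events(dummy):
    print(f"0x{w:08X}")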

@kwagyeman
Member

Please take a look at this comment: openmv/openmv#2970 (comment)

kwagyeman self-requested a review on February 12, 2026 at 18:39
jeff-shulkin force-pushed the raw_stream branch 2 times, most recently from 552a20f to 55bd352 on February 16, 2026 at 17:09
Comment on lines +31 to +73
# Prime DMA system before reading raw events.
# Prime on first size or poll request.
primed = False

# Only grab next frame when current FIFO buffer has been fully streamed.
frame_available = False


class EventChannel:
    def size(self):
        global primed
        if not primed:
            primed = True

        return event_length

    def shape(self):
        return (event_length, 1)

    def read(self, offset, size):
        global frame_available

        if frame_available:
            end = offset + size
            mv = events_mv[offset:end]
            if end == event_length:
                frame_available = False
            return mv
        return bytes(size)

    def poll(self):
        global primed
        if not primed:
            primed = True

        return frame_available


protocol.register(name='events', backend=EventChannel())

while True:
    if not frame_available and primed:
        csi0.ioctl(csi.IOCTL_GENX320_READ_EVENTS_RAW)
        frame_available = True
Member
@kwagyeman kwagyeman Feb 16, 2026

Suggested change

# Only grab next frame when current FIFO buffer has been fully streamed.
frame_available = True


class EventChannel:
    def size(self):
        return len(events_mv)

    def shape(self):
        return (len(events_mv), 1)

    def read(self, offset, size):
        global frame_available
        if frame_available:
            end = offset + size
            mv = events_mv[offset:end]
            if end == len(events_mv):
                frame_available = False
            return mv
        return bytes(size)

    def poll(self):
        return frame_available


protocol.register(name='events', backend=EventChannel())

while True:
    if not frame_available:
        events = csi0.ioctl(csi.IOCTL_GENX320_READ_EVENTS_RAW)
        events_mv = memoryview(events.bytearray())
        frame_available = True

Member

Since you have the fifo enabled, you have to treat each events object returned as a new image from which you have to get the byte array.

What's happening above is that the frames accumulate in the fifo onboard. When a USB access happens, it's going to suck out whatever the latest frame is, unlock the main loop to pop another off the head of the fifo, and then lock it again for USB to suck out another frame.

As long as the rate at which frames are removed from the fifo is faster than the rate at which it fills up, you shouldn't drop any data.
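
To make the "faster than the fifo fills up" condition concrete, a rough back-of-envelope check follows; every number below is a hypothetical placeholder, not a measured figure for the GENX320 or the USB link.

EVENT_WORD_BYTES = 4                # raw events are 32-bit words
events_per_second = 5_000_000       # hypothetical peak event rate
usb_bytes_per_second = 30_000_000   # hypothetical effective USB drain rate

produced = events_per_second * EVENT_WORD_BYTES
print(f"produced {produced} B/s, drained {usb_bytes_per_second} B/s")
print("keeps up, no dropped data" if usb_bytes_per_second >= produced
      else "fifo will eventually overflow")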
