drivers/genx320: Added raw event reading functionality #2970
jeff-shulkin wants to merge 1 commit into openmv:master
@jeff-shulkin - Thanks for this PR. To improve performance, you should not perform the memcpy at all. Instead, update the IOCTL to accept a pointer to an image object, and pass that to csi_snapshot internally. Then update the logic here like this: This will remove that memcpy.

For the Python code in the app, you'll also want to remove the FIFO buffer implemented in Python. It's no longer needed, since you can use the FIFO buffer inside the camera driver when it's in FIFO mode. So, basically, you pass the result of calling read_events_raw directly to the protocol channel.

Note that you will need to avoid calling snapshot() in the protocol callback. In the script, wait for the first read or size request on the protocol channel to set a flag that primes the pipe, and then in the main while loop start calling snapshot and return the result in the protocol callback. When in image FIFO mode (frame buffers > 3), the first call to snapshot kicks off the DMA system and then returns the first frame. The frames behind it are queued up in the FIFO. If the FIFO overflows, it is completely reset, which causes data loss. This is why you want to wait until you get the first request to read the channel (or similar) from the PC before calling the first snapshot. Calling snapshot in the protocol callback would be the easiest thing to do, but then things get a bit re-entrant.
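The "prime the pipe" pattern described above can be sketched in plain Python. All names here (`on_channel_request`, `fake_snapshot`) are hypothetical stand-ins, not the actual OpenMV API: the protocol callback never calls snapshot itself, it only returns the most recent buffer, and the first request from the PC sets a flag that lets the main loop begin snapshotting.

```python
# Sketch only: the real script would use the OpenMV csi/protocol APIs.
primed = False          # set by the first PC request
latest = None           # most recent event buffer

def on_channel_request():
    # Called when the PC asks for data (or the channel size).
    # Deliberately does NOT call snapshot -- that would be re-entrant.
    global primed
    primed = True
    return latest       # may be None before the first snapshot

def fake_snapshot(i):
    # Stand-in for csi.snapshot(); returns a new buffer each call.
    return bytes([i]) * 4

# Main "superloop": hold off on the first snapshot until the PC asks,
# so the camera-side FIFO cannot overflow while nobody is reading.
frames = []
for i in range(3):
    if i == 1:
        on_channel_request()        # simulate the PC connecting mid-loop
    if primed:
        latest = fake_snapshot(i)
        frames.append(latest)
```

The point of the flag is ordering: snapshots only start once a reader exists, so queued frames are drained instead of overflowing the driver FIFO.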
(force-pushed from dd97734 to dd5f987)
Updated the code to get rid of the memcpy. There are some formatting issues I'm not sure how best to resolve, as fixing them would require either shortening the IOCTL call name or fixing the formatting of every single IOCTL definition in omv_csi_ioctl_t. In any case, I should also have a PR up in openmv-projects with the raw streaming soon. I'm still working through the kinks of getting this working without any dropped data, so some more guidance when I put up that PR would be greatly appreciated.
For the formatting, you just need to add spaces as shown. Yes, that means fixing the spacing of every IOCTL. Thanks for your work on this. Ibrahim and I discussed in our last meeting potentially adding RAW event processing and support to the OpenMV IDE. It's not necessarily hard, but I have other, more pressing issues right now. What you are doing currently is the fastest way to get the best data transfer speed for now.
(force-pushed from dd5f987 to f1e704f)
Sorry for the long lag time on this PR; I've been busy over the past couple of weeks. I just made a PR on openmv-projects (openmv/openmv-projects#7) that shows how to stream out the 32-bit words. I was able to get rid of the Python-side FIFO buffer completely, but I'm still seeing odd behavior. To be more specific, the EVS still seems to be dropping data, resulting in skipped frames when visualizing the EVS histograms. I'm not sure whether it's inefficiency in the altered C code or in the MicroPython scripts (though I heavily suspect the latter), so any insight on how to reduce the amount of dropped data would be greatly appreciated.

Furthermore, I haven't pushed this change yet, but I did have to change a couple of lines of code to get rid of the FIFO buffer. More specifically, I had to change the suggested lines in GENX320's ioctl from this: to this: This is a pretty hacky solution, though. Is there a better way of writing this? The reason I reset ret to img->w * img->h is that, if we just return ret as zero, then this branch: never triggers.
@jeff-shulkin - Can you look at this script I wrote for RAW Bayer image tuning on the desktop? https://github.com/openmv/openmv-projects/blob/master/ccm-tuning/ccm_tuning_on_cam.py It shows how to send a frame buffer to the PC, bypassing the stream buffer. Then you should just need to turn on the csi.framebuffer() FIFO by passing in a large frame buffer count, and finally make the buffer size very large to maximize transfer bandwidth. Small 2K event buffers are unlikely to work well; aim for 16K to 32K buffer sizes.

As for your question, just change: To: Note that in the example I used, the buffer I am pulling data from doesn't move. You'll have to re-create the memoryview after each snapshot return, because it will point to a new buffer address if you have more than one buffer.
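The memoryview pitfall mentioned above can be shown without any camera hardware. This is a minimal plain-Python illustration (the bytearrays are stand-ins for the driver's frame buffers): a view taken once keeps pointing at the old buffer, so with more than one buffer in the FIFO you must re-create the view after each snapshot.

```python
# Two stand-in frame buffers, as if the driver's FIFO had depth 2.
buffers = [bytearray(b"AAAA"), bytearray(b"BBBB")]

stale = memoryview(buffers[0])      # view taken once, before the next "snapshot"
buffers[1][:] = b"CCCC"             # new event data lands in the second buffer

# Wrong: the stale view still shows the first buffer's contents.
wrong = bytes(stale)

# Right: re-create the view from whatever buffer the latest snapshot returned.
fresh = bytes(memoryview(buffers[1]))
```

In the real script this means taking a fresh `memoryview` of the returned image's buffer inside the loop, not once at startup.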
(force-pushed from f1e704f to ed57a27)
Just updated the C code to check for a successful return from snapshot, as suggested. However, concerning the MicroPython scripts, I'm running into an efficiency issue. After taking a look at the ccm_tuning scripts, I altered my EventChannel class to look like this: The problem is that, with the way the script is set up now, it's flooding the USB stream with old events, leading to a massive slowdown. If possible, I'd like to do the same optimization as the unpacked ndarray streaming script, where we send only new events. I believe this change would be trivial, as it would just mean reading from 0 to size, where size is the number of new events instead of the full buffer size. At the same time, with how the ioctl call for GENX320_READ_EVENTS_RAW works, I'm not sure how to get just the number of new events. Do you have any advice on this?
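The "send only new events" idea above can be sketched as a slice over the event buffer. This is hedged: it assumes the driver could report how many 32-bit words it actually wrote (`n_events` below is hypothetical; the current GENX320_READ_EVENTS_RAW IOCTL does not expose such a count), in which case the script would transmit only the fresh prefix instead of the whole buffer.

```python
import struct

WORD_SIZE = 4                        # EVT2.0 events are 32-bit words

buf = bytearray(32)                  # 8-word event buffer (stand-in)
# Pretend the driver wrote three new event words at the front.
struct.pack_into("<3I", buf, 0, 0x10000001, 0x10000002, 0x10000003)

n_events = 3                         # hypothetical count reported by the driver
payload = bytes(memoryview(buf)[: n_events * WORD_SIZE])
# Only the fresh events would go to the USB channel; the stale tail is skipped.
```

Exposing that count would likely mean widening the IOCTL's return contract, which is exactly the open question in the comment above.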
@jeff-shulkin - Let me give the RAW mode a shot. I think I might have some time today. |
@kwagyeman Figured out the streaming issue I was running into earlier. The problem was that I forgot to actually use the system_primed variable in the script's superloop, so it was extremely re-entrant. I'll update the script in openmv-projects to be correct, so you should be able to check out RAW mode in the next hour or so. The problem I'm running into right now is that, if I set CSI_FIFO_DEPTH > 1, the camera freezes on me at 8192 events and above, forcing a resync. If you have any insight here, it would be much appreciated.
Hi, I'll comment directly on the project's PR. This feature appears fine here in the firmware. |
Addresses #2945. Adds the IOCTL call OMV_CSI_IOCTL_GENX320_READ_EVENTS_RAW, which simply returns the 32-bit words provided by the GENX320 in EVT2.0 format. When streaming, this allows a downstream PC to handle 3x the number of events.