US11039092B2 - Sparse scanout for image sensors - Google Patents
- Publication number: US11039092B2
- Application number: US16/185,265
- Authority: United States (US)
- Prior art keywords: scanout, image data, area, coordinate, image
- Prior art date: 2017-11-15
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
  - H04N5/3454
  - H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
    - H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
      - H04N25/44—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
        - H04N25/443—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by reading pixels from selected 2D regions of the array, e.g. for windowing or digital zooming
    - H04N25/70—SSIS architectures; Circuits associated therewith
      - H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
  - H04N7/00—Television systems
    - H04N7/08—Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    - H04N7/12—Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
Definitions
- the technology herein relates to image sensor readout, and more particularly to systems, methods, programs and computer readable media for reading out and/or transmitting less than all image data acquired by an image sensor.
- the technology herein also relates to improved high-speed protocols for point-to-point image and video transmission between cameras and host devices.
- AR: augmented reality
- VR: virtual reality
- automotive applications use lenses (for instance, fisheye lenses) that do not fill the entire frame, so wasted pixels are transmitted over the image link.
- Rectangular cropping works for some lenses, but is not usually useful for fisheye lenses. Additionally, some applications wish to transmit non-contiguous cropping regions (“regions of interest” or “ROIs”), which also would benefit from transmitting less than all of the pixels a particular image representation is capable of encoding, e.g., only the pixels in the image that are used—which is not a simple rectangular crop. Also, single rectangular cropping typically does not permit multiple regions of interest.
- Short packets can be expensive on legacy PHY implementations such as some D-PHY and C-PHY physical layer implementations (e.g., those that lack Latency Reduction and Transport Efficiency, or LRTE, which was introduced in MIPI CSI-2 v2.0 to provide more bandwidth per packet at the same link speed), and short-packet overhead may not be trivial even with modern PHYs.
- the short packet approach would not necessarily be supported by serializers and deserializers (“serdes”—typically hardware/devices that efficiently serialize and deserialize image streams) in the ecosystem.
- SoC: system on a chip
- FIG. 1 is a schematic diagram of an example system including an image sensor and a host.
- FIG. 2A shows an example standardized serial camera data protocol.
- FIG. 2B shows an example fisheye lens image.
- FIG. 2C shows an example projection of a fisheye lens onto the image plane of a rectangular image sensor.
- FIG. 3A shows an example sensor image having three scanout regions A, B and C.
- FIG. 3B shows an example readout transmission format for the FIG. 3A scanout regions.
- FIGS. 4A and 4B show example receiver decoding processes.
- FIG. 5A shows an example elliptical readout area.
- FIG. 5B shows an example readout transmission format for the FIG. 5A elliptical readout area.
- FIG. 6 shows an example readout transmission format including embedded region of interest data.
- FIG. 7 shows an example mapping between physical sensor coordinates for scanout areas and logical scanout coordinates.
- non-limiting examples herein split region-of-interest information functionality from sparse scanout information functionality. Addressing sparse scanout independently from regions-of-interest is useful because if one puts aside the region-of-interest feature, it is possible to scan out arbitrary (non-rectangular) shapes such as for example an ellipse or any other desired shape. It is further possible in some example non-limiting implementations to use sparse scanout functions to enable, support or complement high-efficiency region-of-interest identification if desired, but sparse scanout as described herein has significant flexibility and functionality (e.g., enabling non-rectangular high-efficiency image cropping) beyond region-of-interest identification.
- the MIPI CSI protocol is extended with support for pixel coordinates in long packets.
- the receiver/decoder uses these pixel coordinates to compute where in memory these pixels should be stored and/or where on a display they should be displayed.
- One embodiment extends the protocol header to include pixel coordinates. This approach would not be backwards compatible (and would interfere with serdes).
- Another option would be to carry the coordinates within the pixel data, if mutually negotiated by the SoC and the image sensor. Pixel coordinates for crop regions on the sensor in non-rectangular crops can be computed with a low-cost approximation (for instance, a piece-wise linear curve for start and stop X coordinates).
- Such example non-limiting techniques as disclosed herein can be used for truly sparse scanout of arbitrarily shaped image areas, including for example elliptical readout. These techniques help save MIPI and serializer/deserializer (“serdes”) bandwidth in automotive, telepresence, surveillance and other applications, allowing for more cameras per vehicle or other host and/or lower-cost serdes. Such techniques can be implemented in a way that is compatible and non-disruptive with prior MIPI standardized approaches.
- Example non-limiting implementations provide image data transmission comprising: defining a scanout area smaller than an image area; determining a starting position of a portion of the scanout area; and including, within a long packet of a serial image data transmission, (a) image data from the scanout area portion, and (b) a coordinate indicating the determined starting position of the scanout area portion.
- the coordinate may be paired with a further coordinate indicating line number.
- Region of interest information may be embedded corresponding to the defined scanout area. Some embodiments do not transmit portions of the image area outside the defined scanout area.
- the defined scanout area may for example be elliptical, rectangular, or of arbitrary shape.
- a system may subsequently transmit a further long packet different from the first-mentioned long packet, the further long packet not including any coordinate and containing image data to be located using the coordinate indicating the determined starting position of the scanout area portion that was included in the first-mentioned long packet.
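For illustration, here is a minimal transmitter-side sketch in Python (the `SparsePacket` record and `encode_rect_area` helper are this description's hypothetical illustration, not the CSI-2 wire format) showing how only the first long packet of a rectangular scanout area carries the coordinate pair:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SparsePacket:
    has_coord: bool                    # would be flagged in the packet header (PH)
    coord: Optional[Tuple[int, int]]   # (x, y) start position; present on discontinuities
    payload: bytes                     # one line (or line segment) of image data

def encode_rect_area(frame: List[bytes], x: int, y: int, w: int, h: int) -> List[SparsePacket]:
    """Emit one long-packet-like record per line of a w x h scanout area whose
    upper-left corner is (x, y). Only the first record carries the coordinate
    pair; for the rest, the receiver infers "same x, next line"."""
    packets = []
    for row in range(h):
        line = frame[y + row][x:x + w]
        coord = (x, y) if row == 0 else None
        packets.append(SparsePacket(coord is not None, coord, line))
    return packets
```

Transmitting area A of FIG. 3A would then correspond to a call like `encode_rect_area(frame, xa, ya, width_A, N)`.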
- Another non-limiting implementation provides a system for sparse scanout transmission comprising: an image sensor configured to select a scanout area smaller than an image area, and to determine a starting position of a line corresponding to a portion of the selected scanout area; and an interface configured to include, within a long packet of a serialized image data interface transmission, a coordinate indicating the starting position of the line corresponding to selected scanout image data contained in the long packet.
- the coordinate may be paired with a further coordinate indicating line number.
- the interface may be configured to embed region of interest information corresponding to the selected scanout area.
- the interface may be configured to not transmit portions of the image area outside the selected scanout area.
- the selected scanout area may be elliptical, rectangular, or of arbitrary shape.
- the interface is configured to subsequently transmit a further long packet not including a coordinate, the further long packet containing image data to be located using a previously-transmitted coordinate.
- a sparse scanout receiver comprising: an interface configured to receive a long packet including a coordinate indicating a starting position of a line corresponding to scanout image data contained in the long packet; the interface being further configured to receive a further long packet not including a coordinate indicating a starting position of a line corresponding to further scanout image data contained in the further long packet; and a processor operatively coupled to the interface, the processor configured to apply the coordinate to a horizontal starting position of the further scanout image data while incrementing a line designator for the first-mentioned scanout image data to obtain a line designator for the further scanout image data.
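The FIG. 4A/4B receiver rule can be sketched the same way (a minimal model under the same hypothetical packet layout; a real receiver would DMA into memory rather than slice Python buffers):

```python
def place_sparse_packets(packets, framebuffer):
    """Receiver-side placement rule. `packets` is an iterable of
    (coord_or_None, payload) pairs; `framebuffer` is a list of bytearray rows.
    A packet carrying a coordinate pair resets the default x and the current
    line y; a packet without one starts at the same default x, one line down."""
    default_x, cur_y = 0, 0
    for coord, payload in packets:
        if coord is not None:          # discontinuity: new position transmitted
            default_x, cur_y = coord
        else:                          # no coordinates: same x, next line
            cur_y += 1
        framebuffer[cur_y][default_x:default_x + len(payload)] = payload
    return framebuffer
```

Feeding it the area-A packets ((xa, ya) with line A1, followed by coordinate-less lines A2, A3, and so on) places line Ak on row ya+k−1 starting at column xa.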
- PH: packet header
- Region of interest information may be embedded in further long packets. In some implementations, only some long packet payloads contain coordinate pairs.
- FIG. 1 illustrates a schematic block diagram of a system 100 for generating video, according to a non-limiting embodiment.
- System 100 includes one or a plurality of cameras 101, and a host or other device that in some instances may be or include, for example, a mobile device.
- the one or plurality of cameras 101 may be operable to capture video frames from one or more different directions to generate one-channel or multi-channel video streams.
- the host device may use the generated images for various purposes including but not limited to for example pattern recognition for self-driving vehicles having many high resolution cameras imaging the self-driving vehicle from all sides and often using fisheye lenses (see discussion below).
- the mobile device includes a system on a chip (SOC) processor containing a CPU (central processing unit) 102 and a GPU (graphics processing unit) 103 which are integrated within or otherwise connected to the SOC.
- GPU 103 may have higher capability to perform floating point arithmetic and/or parallel arithmetic than CPU 102 .
- the CPU 102 may be configured to issue to the GPU 103 instructions to process the one-channel or multi-channel video streams by using e.g., parallel computing.
- the GPU 103 may process a large amount of image data in parallel for any desired purpose including pattern recognition, image compression, or the like.
- Other implementations such as general purpose computing hardware executing software instructions stored in non-transitory memory, special purpose or application specific hardware implementations, server implementations, gate array implementations, and other variations are contemplated within the scope of this disclosure.
- the one or a plurality of cameras 101 may each include a CMOS sensor and an associated camera control interface (“CCI”) or other interface for transferring a corresponding video stream to the GPU 103 .
- the CCI may operate based on the Mobile Industry Processor Interface (“MIPI”) standard such as CSI-2 v1.3 or any version or improvements on that standard and associated protocol.
- the CSI-2 Specification defines standard data transmission and control interfaces between the camera as a peripheral device and a host processor, which is typically a baseband or application engine.
- the MIPI CSI-2 protocol contains transport and application layers and natively supports various physical layers (PHYs) such as C-PHY, D-PHY, or combo C/D-PHY.
- the CCI camera control interface for both physical layer options is bi-directional and compatible with the I2C, SPI and other serial communications standards.
- the cameras 101 and camera interfaces to memory devices 104 , 105 each provide hardware and/or software to operate in accordance with this standardized protocol, and connections between them are structured to comply with the standardized protocol.
- the MIPI standard provides multiple physical layers including D-PHY and C-PHY.
- D-PHY as used in CSI-2 is a unidirectional differential interface with one 2-wire forwarded clock lane and one or more 2-wire data lanes.
- C-PHY consists of one or more unidirectional 3-wire serial data lanes or “trios”, each with its own embedded clock.
- a combination PHY is also possible. The way these D-PHY and C-PHY approaches have been used in the past generally has been based on an assumption of a rectangular readout.
- FIG. 2A shows a MIPI CSI-2 standard transmission protocol including initial and ending “short” packets used for frame synchronization. These “short” packets contain a small amount of metadata and no image data, and are used primarily to indicate Frame Start and Frame End. Image data is transmitted in the payload of one or more “long” packets. Each “long” packet is delimited by an initial Packet Header (“PH”) that signals that a long packet is to follow and indicates the length of the image data payload to follow (this length is variable). A Packet Footer (“PF”) is transmitted after the image data payload. As can be seen in the diagram, significant time can be required to transition from idle mode to transmitting a short packet. See also, e.g., the MIPI Alliance Specification for Camera Serial Interface 2 (CSI-2), the “MIPI Alliance Specification for D-PHY” (Sep. 22, 2009, pp. 1-123) and other information available at www.mipi.org.
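For orientation, a simplified structural model of this framing (D-PHY case; the field widths follow the public CSI-2 documentation, but this is an editorial sketch rather than a bit-exact implementation):

```python
from dataclasses import dataclass

@dataclass
class ShortPacket:
    """e.g., Frame Start / Frame End: metadata only, no image data."""
    data_id: int      # 2-bit virtual channel + 6-bit data type
    data_field: int   # 16 bits, e.g., a frame or line number
    ecc: int          # 8-bit error-correcting code over the header

@dataclass
class LongPacket:
    """Carries the image data payload between PH and PF."""
    data_id: int      # virtual channel + data type (e.g., a RAW pixel format)
    word_count: int   # 16-bit payload length in bytes (variable per packet)
    ecc: int          # 8-bit ECC covering data_id and word_count
    payload: bytes    # word_count bytes of pixel data
    checksum: int     # 16-bit CRC carried in the Packet Footer (PF)
```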
- the camera(s) 101 may be implemented using a rectangular CMOS, CCD or other image sensor typically comprising or including a rectangular array of image cells.
- Each cell in the array accumulates charge relating to the intensity of exposure to light for a certain duration, including (for color sensors) the intensities of different colors of light (e.g., Red, Green and Blue).
- the number of image cells of the CMOS sensor determines the resolution of the image.
- High resolution image sensors can have millions of image cells and sometimes multiple wells per photodiode, for multiple exposures, resulting in large quantities of captured data.
- a typical camera constructed using a digital image sensor as described above includes a lens that focuses light onto an image plane of the image sensor.
- the optical characteristics of the lens will determine in part the characteristics of the image the image sensor captures.
- Different lenses can be used for different applications. For example, telephoto lenses having long focal lengths can be used to capture images of distant objects. Telephoto lenses have a narrow field of view, meaning that while they tend to magnify the size of distant objects, they only capture a small portion of the total field of view as compared to the naked eye.
- Wide angle lenses in contrast have short focal lengths and wide fields of view. Wide angle lenses can be used to capture images that have more objects than can be seen by the naked eye without turning the head.
- an extreme form of wide angle lens is the so-called “fisheye” lens.
- a fisheye lens is an ultra wide angle lens that has a very wide field of view, e.g., in excess of 100 degrees.
- Fisheye lenses can be useful when it is desirable to image wide views (e.g., the entire sky, a wide view of the road ahead, a sweeping landscape, etc.).
- Fisheye lenses for automotive applications provide an advantage of a wider field of view from a single camera, meaning that fewer image sensors can achieve full coverage of the same area.
- Other applications that benefit from fisheye imaging include telepresence and surveillance/intelligent video analytics, where a single stationary camera may be used to capture an entire room or other space.
- such fisheye lenses distort the image based on a mapping function or projection.
- the fisheye lens typically comprises a circular piece of glass or other material with a high refractive index.
- the image the fisheye lens projects onto the image sensor plane is typically non-rectangular and usually in the shape of a circle or an ellipse.
- the lens projects a circular or elliptical image which is fully contained within a rectangular sensor array.
- a so-called “full frame” fisheye camera may fill the entire rectangular frame with exposed image data, but some of the fisheye image will fall outside of the rectangular frame.
- FIGS. 3A and 3B show a basic frame configuration assuming no frame memory and line-based read-out.
- FIG. 3A shows three readout areas: A, B and C, and
- FIG. 3B shows example transmission of long packets containing the image data corresponding to those readout areas (each long packet being preceded by a packet header PH).
- areas A and B overlap each other, whereas area C does not overlap either area A or area B.
- each of areas A, B and C is rectangular but need not be in general.
- Such areas A, B and C could in some applications be localized regions of interest, e.g., where objects of interest have been previously detected or are otherwise known to exist.
- the image sensor may have a local face detection circuit that detects all faces in the image, and the image sensor sends the host only data corresponding to detected faces.
- the host can (for whatever reason) command the camera to send only specified portions of the captured image, or the camera could use conventional segmentation processes to detect those portions of the captured image that are not dark and transmit only those non-dark portions.
- the upper left-hand corner of region A has the address/location (xa, ya), and the upper left-hand corner of region C has the address/location (xc, yc).
- One approach followed by the example non-limiting embodiments is to transmit the location where the image sensor readout process is going to transmit in an efficient fashion. If the readout will be rectangular, then it is possible to save bandwidth by not transmitting coordinates for every packet.
- the X and Y coordinates are included in packets with discontinuities, and the Packet Headers specify whether coordinates are included in MIPI long packets. If no coordinates are specified, the next packet starts at the same X coordinate as the previous one, but on the next line.
- readout of area A begins by transmitting a packet header PH indicating the long packet contains a coordinate pair and the length of the long packet payload, followed by a pair of coordinates indicating the upper lefthand corner location (xa, ya) of area A, followed by the first horizontal line of image data of area A.
- the receiver sets the xa coordinate as the default x coordinate position for all subsequently-received lines of image data that are not preceded by a new coordinate pair (see FIG. 4A ), and automatically increments the y coordinate to the next successive line for each next line of image data received without a new coordinate pair (i.e., based on the assumption that the readout is being performed as a raster scan in sequential-line order) (see FIG. 4B ).
- the long packet PH for this next successive line indicates that no coordinate pair is being sent, and indicates the length of the next long packet.
- the transmitter transmits a Packet Header (indicative of a start of a new line in this example) indicating that the long packet does not contain any new coordinate pair, followed by the next successive line of image data A1 of area A, and the receiver understands that it is to store and/or display and/or process that next line of image data at a location corresponding to the same default starting xa coordinate, but at an automatically incremented y coordinate (e.g., ya+1, ya+2, etc.) line position. See FIGS. 4A and 4B.
- DSP: digital signal processor
- the Packet Footer and/or Packet Header in the MIPI standard CSI-2 protocol act as a “carriage return” of sorts in a text rendering (typewriter) analogy, with the previously-transmitted coordinate pair determining the x location (margin) to which the “carriage” is to be returned.
- in other embodiments, each long packet may (or always will) contain an explicit coordinate pair locating the position of the line segment of image data provided by the long packet's payload.
- if the transmitter and receiver are not constrained to strictly observe scan-line order and transmit the entirety of one line before transmitting any of a subsequent line, the transmitter could use the efficiencies of the approach described herein to transmit one rectangular (or other-shaped) region comprising multiple successive lines using a first starting coordinate pair, and then subsequently transmit a second rectangular (or other-shaped) region using a second starting coordinate pair, even when the two regions occupy different portions of the same horizontal line(s) (readout will then not be in strictly scanline order).
- the first image data sent for a frame may be preceded by a coordinate pair (e.g., even if the entire frame contents are being sent, in which case the coordinate pair would be 0,0 to indicate the upper left corner of the frame).
- the protocol permits each long packet to include a coordinate pair, although an optimization is that a coordinate pair does not need to be sent on a “next” line if the image data for that line begins at the same x position as for the previous line.
- the receiver will use a default x starting position as the x coordinate of the last coordinate pair sent, and will set the location of the next image data to be sent on the line immediately subsequent to the line just previously sent (i.e., the y coordinate is incremented by one).
- area B overlaps area A and appears to the lower right of it.
- the transmitter appends the first line of image data of area B to the appropriate line of image data for area A and transmits both the A area image data for that line followed by the B area image data for that line in the same long packet without any new coordinate pair.
- the receiver stores (and/or displays) the concatenated A+B image data line on the appropriate horizontal line. Because MIPI supports long packets of variable length, such a concatenated line of A area image data+B area image data can have a length that is independent of the length of the immediately previous line containing A area image data only.
- the next long packet to be transmitted can contain the next line of A area image data+B area image data, and so on.
- This scenario is indistinguishable from a single area having the shape of area A unioned with area B, i.e., the information transmitted or received does not reveal area A and area B as being two discrete areas or regions of interest.
- the first concatenated line of image data AN−1+B1 appears to be a long single line of image data having a horizontal length corresponding in FIG. 3A to area A+area B.
- the overall process for transmitting area A+area B corresponds to transmitting a single non-rectangular area that is the union of area A and area B.
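That union behavior can be made concrete with a small hypothetical helper (an editorial sketch, not part of any protocol): per scan line, overlapping or abutting region spans collapse into the single variable-length segments that actually get transmitted.

```python
def line_segments(rois, y):
    """Merge the per-line coverage of rectangular regions of interest.
    `rois` is a list of (x, y, w, h) rectangles; the result is the sorted
    list of disjoint [start_x, end_x) spans covering row y, i.e., exactly
    the variable-length line segments a transmitter would send, with no
    per-region identity surviving the merge."""
    spans = sorted((rx, rx + rw) for rx, ry, rw, rh in rois if ry <= y < ry + rh)
    merged = []
    for start, end in spans:
        if merged and start <= merged[-1][1]:        # overlapping/abutting spans
            merged[-1][1] = max(merged[-1][1], end)  # concatenate into one segment
        else:
            merged.append([start, end])
    return merged
```

For rows where areas A and B of FIG. 3A overlap, the two rectangles yield one merged span per line, matching the concatenated A+B lines described above.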
- the designation of areas A, B and C as separate areas of interest thus exists only in theory.
- No designation of “A region”, “B region” or “C region” is transmitted, in some example embodiments; there are only pixels that are sent and pixels that are not sent.
- if each camera can have many (e.g., 64 or more) regions of interest and there are many (e.g., 12) cameras, the total number of regions of interest for the receiver to keep track of becomes large, e.g., 768 regions of interest.
- the approach disclosed herein is very low cost and is compatible with the standard way in which serialized image transmission systems are designed.
- region of interest information can be added (as discussed below).
- area A has N lines with the last line being AN.
- the transmitter transmits concatenated lines AN−2+B1, AN−1+B2, and AN+BK.
- once the last line AN of area A has been transmitted, there is no more image data from subsequent lines “below” area A in FIG. 3A.
- the transmitter sends a new coordinate pair (xab, yab) that corresponds not to the “corner” point of area B, but rather to the starting location of the first line (BK+1) of area B that is not concatenated with a line of area A.
- the transmitter follows this new coordinate pair with the image data of area BK+1 corresponding to that line yab, and the receiver interprets the x coordinate xab to indicate the starting x location to store and/or display that BK+1 line of image data.
- the same process as described above continues for each subsequently-transmitted line of image data of area B, until the last line BLAST has been transmitted and received.
- FIG. 3A shows that area C is a rectangular area that has an upper lefthand corner position of (xc, yc). Similar to the example above for transmitting area A, the transmitter transmits that corner coordinate pair (xc, yc) and then transmits the first line of area C image data C1, followed by the second line C2, . . . , and the last line CLAST, each line of image data in its own respective long packet.
- the long packet delimiters are interpreted as virtual “carriage returns”, but in this case the virtual “carriage” returns to the x coordinate position xc established as the default by the earlier transmission of the (xc, yc) coordinate pair, as FIGS. 4A and 4B describe.
- while the examples above use an extended packet header PH to indicate whether the long packet contains a coordinate pair, other embodiments do not change the packet header PH to include such information.
- there may be components between the image sensor and the SoC, e.g., serializers, deserializers and aggregators, that change the format of the data.
- the image sensor and the SoC might share information beyond that of a legacy MIPI standard, but intervening components would nevertheless expect the data stream to strictly comply with that legacy MIPI standard.
- the packet header PH should maintain its legacy form, although the image data payload could be extended without compromising compatibility.
- embedded delimiters within the long packet payload may be used to indicate the presence of a coordinate pair, or the coordinate pair could be encoded in a way that makes it easily distinguishable from any possible or practical image data, or out of band metadata could be used to indicate coordinate pairs.
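One hypothetical realization of the embedded-delimiter option (the marker value, the field layout, and the assumption that the marker cannot occur as valid pixel data are illustrative design choices, not part of any standard):

```python
import struct

MARKER = b"\xff\x00\xff\x00"  # assumed to be impossible/illegal as pixel data

def wrap_line(line: bytes, coord=None) -> bytes:
    """Prefix the payload with MARKER + little-endian (x, y) when a
    coordinate pair must be signaled in-band; otherwise send the bare line."""
    if coord is None:
        return line
    return MARKER + struct.pack("<HH", *coord) + line

def unwrap_line(payload: bytes):
    """Inverse: split off a leading coordinate pair if the marker is present."""
    if payload.startswith(MARKER):
        x, y = struct.unpack_from("<HH", payload, len(MARKER))
        return (x, y), payload[len(MARKER) + 4:]
    return None, payload
```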
- FIGS. 5A and 5B show the implementation described above applied to a non-rectangular area such as the ellipse depicted in FIG. 5A.
- the system can use this capability to start each sequential line of image data at a different x position. This allows the transmitter to transmit image data corresponding to a piecewise linear ellipse or any other shape of arbitrary complexity through transmission of a coordinate pair for each line followed by the variable-length image data for that line. Trapezoidal or other lower-cost approximations could be used for further increases in efficiency.
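A sketch of that per-line computation for an elliptical readout such as FIG. 5A (exact per-row bounds are computed here; a piecewise-linear or trapezoidal table would approximate them at lower cost, as noted above):

```python
import math

def ellipse_row_spans(cx, cy, a, b):
    """For an ellipse centered at (cx, cy) with integer semi-axes a
    (horizontal) and b (vertical), yield (y, start_x, width) for each
    covered row: the coordinate pair plus variable payload length that
    would be transmitted for that line."""
    for y in range(cy - b, cy + b + 1):
        t = 1.0 - ((y - cy) / b) ** 2      # normalized distance from center row
        half = int(a * math.sqrt(max(t, 0.0)))
        yield y, cx - half, 2 * half + 1
```

Each (start_x, width) pair becomes the coordinate pair and variable payload length of one long packet, so every line can begin at a different x.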
- Such capabilities come “for free” with the sparse scanout capabilities described above.
- pixel coordinates for crop regions on the sensor in non-rectangular crops can be computed with a low-cost approximation, for instance a piece-wise linear curve for start and stop X coordinates.
- Example improvements to the current MIPI CSI-2 standard support region-of-interest identification to designate one or more subsets of a captured image as regions of interest.
- One example non-limiting approach adds some extensions/options on the current Frame/packet header/footer to provide minimal changes from the current CSI-2 format.
- it is possible to start with one or more simple region-of-interest shapes, e.g., a rectangular shape.
- arbitrarily-shaped regions-of-interest would be useful to consider.
- a further example non-limiting design allows space in the design for arbitrary shapes, even if not yet implemented.
- Such a new scheme in one example non-limiting embodiment may need or use an additional format beyond that of existing region-of-interest sensors (e.g., industrial sensors), which support only “Still image”.
- a smart region-of-interest sensor may expand this to “Video Stream” for Internet-of-Things (“IoT”) applications. Overlapped regions-of-interest may be delivered efficiently for IoT applications.
- FIG. 6 shows a further example in which region of interest identification information is included to provide a standardized embedded data format for a region of interest.
- the transmitter transmits an additional long packet “ROI A EMB data” (region of interest A embedded data) before transmitting the image data for region A, and similarly transmits embedded ROI (region of interest) data for region B before transmitting image data for region B.
- the “ROI B EMB data” long packet is transmitted just before the transmitter transmits image data AN−2 for region A concatenated with image data B1.
- the receiver interprets this embedded ROI data packet as applying to the image data for region B.
- the transmitter can transmit one embedded data packet per region of interest.
- the embedded region of interest data can occur anywhere in or after the frame.
- the embedded region of interest data is sent in a long packet, and accordingly can provide various metadata for the region of interest, including any or all of the following:
  - Identified object
  - Exposure/gain information
  - Confidence
  - Priority
  - Other extensions: e.g., CPU sensing acceleration, etc.
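A hypothetical shape for one such embedded ROI record (the field names are illustrative; the disclosure does not fix a byte layout):

```python
from dataclasses import dataclass

@dataclass
class RoiEmbeddedData:
    roi_id: int               # which region the subsequent image data belongs to
    x: int                    # physical position of the ROI on the sensor
    y: int
    width: int
    height: int
    identified_object: str    # e.g., the output of on-sensor face detection
    exposure_gain: float      # exposure/gain information for the region
    confidence: float
    priority: int
```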
- FIG. 7 shows that regions of interest can be reorganized by the interface for legacy CSI system support in a way that is independent of the sparse scanout approach discussed above. If the image sensor is capable of rearranging image elements for transmission rather than transmitting them in strictly sequential line order, the sensor rearranges them into one logical large frame, and then transmits the ROI information indicating where each ROI was physically found on the sensor and where the ROI is logically placed in the image the transmitter is sending. From the receiver's perspective, the received image data looks like a single, very tall frame (see right-hand side of FIG. 7 ) because the coordinates are sent out of band along with the embedded ROI metadata. The ROI metadata does not need to be processed immediately upon receipt.
- the CPU or GPU can later examine the ROI data to determine where each ROI should be placed in the received frame. For example, this information can be transmitted using legacy CSI protocol in a way that does not require sparse scanout as described above. This approach is also 100% compatible with SoC implementations that do not support sparse scanout.
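A minimal model of that FIG. 7 remapping (an editorial sketch; the vertical stacking order and metadata field names are assumptions): each ROI is stacked into one logical frame, and the physical-to-logical correspondence travels out of band.

```python
def pack_rois_logically(rois):
    """`rois` is a list of (phys_x, phys_y, w, h) rectangles as found on the
    sensor. Stack them vertically into one logical frame (the single "very
    tall frame" the receiver sees) and return the out-of-band metadata the
    CPU/GPU can use later to map logical rows back to sensor coordinates."""
    metadata, logical_y = [], 0
    for phys_x, phys_y, w, h in rois:
        metadata.append({
            "physical_origin": (phys_x, phys_y),  # where the ROI was captured
            "logical_origin": (0, logical_y),     # where it lands in the sent frame
            "size": (w, h),
        })
        logical_y += h                            # next ROI begins on the next row
    return metadata
```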
Claims (18)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/185,265 US11039092B2 (en) | 2017-11-15 | 2018-11-09 | Sparse scanout for image sensors |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762586396P | 2017-11-15 | 2017-11-15 | |
US16/185,265 US11039092B2 (en) | 2017-11-15 | 2018-11-09 | Sparse scanout for image sensors |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190149751A1 (en) | 2019-05-16
US11039092B2 (en) | 2021-06-15
Family
ID=66431618
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/185,265 Active US11039092B2 (en) | 2017-11-15 | 2018-11-09 | Sparse scanout for image sensors |
Country Status (1)
Country | Link |
---|---|
US (1) | US11039092B2 (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10686996B2 (en) | 2017-06-26 | 2020-06-16 | Facebook Technologies, Llc | Digital pixel with extended dynamic range |
US10598546B2 (en) | 2017-08-17 | 2020-03-24 | Facebook Technologies, Llc | Detecting high intensity light in photo sensor |
US12034015B2 (en) | 2018-05-25 | 2024-07-09 | Meta Platforms Technologies, Llc | Programmable pixel array |
US11906353B2 (en) | 2018-06-11 | 2024-02-20 | Meta Platforms Technologies, Llc | Digital pixel with extended dynamic range |
US11463636B2 (en) | 2018-06-27 | 2022-10-04 | Facebook Technologies, Llc | Pixel sensor having multiple photodiodes |
US10897586B2 (en) | 2018-06-28 | 2021-01-19 | Facebook Technologies, Llc | Global shutter image sensor |
US11647284B2 (en) * | 2018-08-20 | 2023-05-09 | Sony Semiconductor Solutions Corporation | Image processing apparatus and image processing system with image combination that implements signal level matching |
US10931884B2 (en) | 2018-08-20 | 2021-02-23 | Facebook Technologies, Llc | Pixel sensor having adaptive exposure time |
US11956413B2 (en) | 2018-08-27 | 2024-04-09 | Meta Platforms Technologies, Llc | Pixel sensor having multiple photodiodes and shared comparator |
US10796402B2 (en) * | 2018-10-19 | 2020-10-06 | Tusimple, Inc. | System and method for fisheye image processing |
US11595602B2 (en) | 2018-11-05 | 2023-02-28 | Meta Platforms Technologies, Llc | Image sensor post processing |
WO2020116213A1 (en) * | 2018-12-06 | 2020-06-11 | ソニーセミコンダクタソリューションズ株式会社 | Reception device and transmission device |
US11962928B2 (en) | 2018-12-17 | 2024-04-16 | Meta Platforms Technologies, Llc | Programmable pixel array |
US11888002B2 (en) | 2018-12-17 | 2024-01-30 | Meta Platforms Technologies, Llc | Dynamically programmable image sensor |
TWI868088B (en) * | 2018-12-20 | 2025-01-01 | 日商索尼半導體解決方案公司 | Communication device, communication method and program |
US11218660B1 (en) | 2019-03-26 | 2022-01-04 | Facebook Technologies, Llc | Pixel sensor having shared readout structure |
US11943561B2 (en) | 2019-06-13 | 2024-03-26 | Meta Platforms Technologies, Llc | Non-linear quantization at pixel sensor |
TW202107892A (en) * | 2019-07-31 | 2021-02-16 | 日商索尼半導體解決方案公司 | Transmission device, reception device, and communication system |
US12108141B2 (en) | 2019-08-05 | 2024-10-01 | Meta Platforms Technologies, Llc | Dynamically programmable image sensor |
US11936998B1 (en) | 2019-10-17 | 2024-03-19 | Meta Platforms Technologies, Llc | Digital pixel sensor having extended dynamic range |
US11935291B2 (en) | 2019-10-30 | 2024-03-19 | Meta Platforms Technologies, Llc | Distributed sensor system |
US11948089B2 (en) * | 2019-11-07 | 2024-04-02 | Meta Platforms Technologies, Llc | Sparse image sensing and processing |
US12141888B1 (en) | 2019-12-18 | 2024-11-12 | Meta Platforms Technologies, Llc | Dynamic and hierarchical image sensing and processing |
US11902685B1 (en) | 2020-04-28 | 2024-02-13 | Meta Platforms Technologies, Llc | Pixel sensor having hierarchical memory |
US11825228B2 (en) | 2020-05-20 | 2023-11-21 | Meta Platforms Technologies, Llc | Programmable pixel array having multiple power domains |
US11910114B2 (en) | 2020-07-17 | 2024-02-20 | Meta Platforms Technologies, Llc | Multi-mode image sensor |
JPWO2022050057A1 (en) * | 2020-09-01 | 2022-03-10 | ||
US12075175B1 (en) | 2020-09-08 | 2024-08-27 | Meta Platforms Technologies, Llc | Programmable smart sensor with adaptive readout |
US11956560B2 (en) | 2020-10-09 | 2024-04-09 | Meta Platforms Technologies, Llc | Digital pixel sensor having reduced quantization operation |
US11935575B1 (en) | 2020-12-23 | 2024-03-19 | Meta Platforms Technologies, Llc | Heterogeneous memory system |
US12022218B2 (en) | 2020-12-29 | 2024-06-25 | Meta Platforms Technologies, Llc | Digital image sensor using a single-input comparator based quantizer |
US12008259B1 (en) * | 2021-09-29 | 2024-06-11 | Ethernovia Inc. | Data processing and transmission using hardware serialization and deserialization functions |
US12244936B2 (en) | 2022-01-26 | 2025-03-04 | Meta Platforms Technologies, Llc | On-sensor image processor utilizing contextual data |
JP2024090345A (en) * | 2022-12-23 | 2024-07-04 | ソニーセミコンダクタソリューションズ株式会社 | Photodetection device and method for controlling photodetection device |
US20250080957A1 (en) * | 2023-08-30 | 2025-03-06 | Vilnius Gediminas Technical University | System for collecting and synchronizing data from wireless wearable sensors and method for synchronizing data from wireless wearable sensors |
CN118032176B (en) * | 2024-04-10 | 2024-07-02 | 国网山东省电力公司潍坊供电公司 | A tactile sensor, sensing method and storage medium for inspection robot |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6560285B1 (en) * | 1998-03-30 | 2003-05-06 | Sarnoff Corporation | Region-based information compaction as for digital images |
US20170251189A1 (en) * | 2014-08-28 | 2017-08-31 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US20160301870A1 (en) * | 2015-04-13 | 2016-10-13 | Canon Kabushiki Kaisha | Image processing apparatus, image capturing apparatus, control method of image processing apparatus, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20190149751A1 (en) | 2019-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11039092B2 (en) | Sparse scanout for image sensors | |
KR102509132B1 (en) | Video transmitter and video receiver | |
CN111295887B (en) | Transmitting apparatus and method with region-of-interest mode selection | |
WO2023024421A1 (en) | Method and system for splicing multiple channels of images, and readable storage medium and unmanned vehicle | |
KR102641559B1 (en) | transmitting device | |
US20200007794A1 (en) | Image transmission method, apparatus, and device | |
US20210281749A1 (en) | Image processing apparatus and image processing system | |
US20230362307A1 (en) | Transmitting apparatus, receiving apparatus, and transmission system | |
CN113099133A (en) | Method for transmitting high-bandwidth camera data by serial deserializer link | |
US20240406575A1 (en) | Image sensor, data processing device, and image sensor system | |
US8228395B2 (en) | Processing image frames in different formats with reduced memory requirements in digital still cameras | |
CN112492247B (en) | Video display design method based on LVDS input | |
CN113170044B (en) | Receiving apparatus and transmitting apparatus | |
US11297279B2 (en) | Transmission device, reception device, and communication system | |
CN117425091B (en) | Image processing method and electronic device | |
US20250095608A1 (en) | Display screen adjustment method, storage medium and terminal device | |
CN112019808A (en) | An intelligent identification device for vehicle real-time video information based on MPSoC | |
CN219812216U (en) | 10 hundred million pixel image sensor | |
KR20190014777A (en) | System and method of processing image signal | |
CN112492298A (en) | Method and device for collecting image | |
KR20230007792A (en) | Clowd server, system comprising clowd server and client device and controlling method thereof | |
CN116456179A (en) | Image acquisition and processing chip | |
CN118044221A (en) | Image sensor, data processing device, and image sensor system | |
CN115086590A (en) | Image processing and fusion computing device based on ground unmanned platform | |
CN117579952A (en) | Image acquisition device, data reading method, control assembly and storage medium |
Legal Events

Code | Title | Description
---|---|---
AS | Assignment | Owner name: NVIDIA CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: WISE, JOSHUA; REEL/FRAME: 047460/0342. Effective date: 20181107
FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED
STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | PATENTED CASE
MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4