CN116193129A - Data processing method and device based on image sensor and image processing system - Google Patents
- Publication number
- CN116193129A (application CN202310085641.0A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- pixels
- group
- space
- time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The disclosure provides an image-sensor-based data processing method and device and an image processing system, and belongs to the field of computer technology. The processing method includes: acquiring spatio-temporal signals of a plurality of pixels through an image sensor; grouping the pixels to obtain a plurality of pixel groups; determining a group address for each pixel group and macro addresses for the plurality of pixel groups according to the pixel addresses within each group; and encoding according to the sampling period corresponding to the spatio-temporal signals, the group address of each pixel group, the macro addresses of the plurality of pixel groups, and the spatio-temporal signals of the pixels in each group, to obtain signal data packets for the plurality of pixel groups. According to the embodiments of the disclosure, coding redundancy can be reduced while the signal information of the pixels is well preserved.
Description
Technical Field
The disclosure relates to the technical field of computers, and in particular relates to a data processing method and device based on an image sensor and an image processing system.
Background
Event cameras, also known as Dynamic Vision Sensors (DVS), are a new type of imaging system. Whereas a traditional camera uses a shutter to control the frame rate and all pixels record light intensity frame by frame, an event camera is sensitive to the rate of change of light intensity: each pixel independently records the change in the logarithm of the light intensity at its location and emits a positive or negative pulse when that change exceeds a threshold. Because of its asynchronous processing characteristic, an event camera is not limited by the frame rate, offers extremely high temporal resolution, and, combined with its sensitivity to light-intensity changes, is naturally suited to tasks such as motion monitoring.
Disclosure of Invention
The disclosure provides a data processing method and device based on an image sensor, an image processing system, electronic equipment and a computer readable storage medium.
In a first aspect, the present disclosure provides a data processing method based on an image sensor, the data processing method including: acquiring space-time signals of a plurality of pixels through an image sensor; grouping the pixels to obtain a plurality of pixel groups; determining the group address of each pixel group and the macro addresses of a plurality of pixel groups according to the pixel addresses in each pixel group; and encoding according to the sampling period corresponding to the space-time signal, the group address of each pixel group, the macro addresses of a plurality of pixel groups and the space-time signal of the pixels in each pixel group to obtain signal data packets of a plurality of pixel groups.
In a second aspect, the present disclosure provides an image sensor-based data processing apparatus comprising: an acquisition module for acquiring spatiotemporal signals of a plurality of pixels by an image sensor; the grouping module is used for grouping the pixels to obtain a plurality of pixel groups; the determining module is used for determining the group address of each pixel group and the macro addresses of a plurality of pixel groups according to the pixel addresses in each pixel group; the coding module is used for coding according to the sampling period corresponding to the space-time signal, the group address of each pixel group, the macro addresses of a plurality of pixel groups and the space-time signal of the pixels in each pixel group to obtain signal data packets of a plurality of pixel groups.
In a third aspect, the present disclosure provides an image processing system comprising: an image sensor based data processing device and at least one image sensor; the image sensor is used for acquiring space-time signals of a plurality of pixels based on a preset sampling period; the image sensor-based data processing device is configured to execute the image sensor-based data processing method according to any one of the embodiments of the present disclosure.
In a fourth aspect, the present disclosure provides an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores one or more computer programs executable by the at least one processor, the one or more computer programs being executable by the at least one processor to enable the at least one processor to perform the image sensor-based data processing method described above.
In a fifth aspect, the present disclosure provides an electronic device comprising: a plurality of processing cores; and a network on chip configured to exchange data among the plurality of processing cores and with external devices; one or more of the processing cores store one or more instructions which, when executed by one or more of the processing cores, enable the one or more processing cores to execute the above image-sensor-based data processing method.
In a sixth aspect, the present disclosure provides a computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor/processing core, implements the above-described image sensor-based data processing method.
According to the embodiments provided by the disclosure, spatio-temporal signals of a plurality of pixels are acquired through the image sensor; these signals can be used to determine how each pixel's signal changes in the time dimension and how it differs from its neighboring pixels, so that the signal information of the pixels is reflected more comprehensively. Second, the pixels are grouped into a plurality of pixel groups; this exploits the locality of spatio-temporal signal changes by placing pixels with similar changes into the same group, reducing the extra redundancy that encoding address information for every change event would otherwise introduce in subsequent coding. Next, the group address of each pixel group and the macro addresses of the plurality of pixel groups are determined from the pixel addresses within each group; encoding with the coarser-grained macro and group addresses reduces address-coding redundancy compared with encoding each pixel address individually. Finally, encoding is performed according to the sampling period of the spatio-temporal signals, the group addresses, the macro addresses, and the spatio-temporal signals of the pixels in each group to obtain signal data packets for the plurality of pixel groups, reducing the temporal and address redundancy of the encoding while retaining the signal information of each pixel, so that corresponding images can be recovered from the signal data packets and various tasks performed on those images.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. The above and other features and advantages will become more readily apparent to those skilled in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
FIG. 1 is a schematic diagram of event-camera encoding according to the related art;
FIG. 2 is a flowchart of a data processing method based on an image sensor according to an embodiment of the disclosure;
fig. 3 is a schematic diagram of pixel distribution corresponding to an image sensor according to an embodiment of the disclosure;
fig. 4 is a schematic diagram of a sampling operation process based on an image sensor according to an embodiment of the disclosure;
fig. 5 is a schematic diagram of a sampling operation process based on an image sensor according to an embodiment of the disclosure;
FIG. 6 is a schematic diagram of grouping of pixels according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of grouping of pixels according to an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a signal packet according to an embodiment of the disclosure;
FIG. 9 is a block diagram of an image sensor-based data processing apparatus according to an embodiment of the present disclosure;
FIG. 10 is a block diagram of an image processing system provided by an embodiment of the present disclosure;
FIG. 11 is a block diagram of an electronic device provided by an embodiment of the present disclosure;
fig. 12 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
For a better understanding of the technical solutions of the present disclosure, exemplary embodiments of the present disclosure will be described below with reference to the accompanying drawings, in which various details of the embodiments of the present disclosure are included to facilitate understanding, and they should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Embodiments of the disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "connected" or "connected," and the like, are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Because both DVS and DAVIS cameras adopt asynchronous sampling and output, their signals are generally encoded and output based on Address Event Representation (AER), which makes this sampling-and-output scheme difficult to make compatible with common output protocols.
Fig. 1 is a coding schematic diagram of an event camera according to the related art. Referring to fig. 1, the pixel at address (x, y) first issues a pulse-signal-and-polarity request to the ARBITER; after the arbiter grants the request, the ADDRESS ENCODER and HANDSHAKE LOGIC encode the pixel's address information, the pulse-signal polarity, and the current timestamp into an AER packet, which is sent to an off-chip receiver. The pulse-signal polarity indicates whether the pixel brightness has increased or decreased relative to the previous sample.
In summary: first, both DVS and DAVIS cameras adopt the asynchronous AER output protocol, and this asynchronous mode is difficult to make compatible with common data receiving and processing chips, so a dedicated asynchronous chip (on-chip or off-chip) is required; second, in terms of coding mode and efficiency, the AER packet format encodes each pixel individually, with only 1 data bit per packet representing the actual pixel value while the remaining bits carry the coordinates and timestamp, wasting a large number of data bits; in addition, the AER packet format encodes a temporal pulse signal and can hardly reflect pixel signal information in the spatial dimension.
In view of the above, embodiments of the present disclosure provide an image-sensor-based data processing method and apparatus and an image processing system, which solve at least one of the above technical problems.
According to the embodiments provided by the disclosure, spatio-temporal signals of a plurality of pixels are acquired through the image sensor; these signals can be used to determine how each pixel's signal changes in the time dimension and how it differs from its neighboring pixels, so that the signal information of the pixels is reflected more comprehensively. Second, the pixels are grouped into a plurality of pixel groups; this exploits the locality of spatio-temporal signal changes by placing pixels with similar changes into the same group, reducing the extra redundancy that encoding address information for every change event would otherwise introduce in subsequent coding. Next, the group address of each pixel group and the macro addresses of the plurality of pixel groups are determined from the pixel addresses within each group; encoding with the coarser-grained macro and group addresses reduces address-coding redundancy compared with encoding each pixel address individually. Finally, encoding is performed according to the sampling period of the spatio-temporal signals, the group addresses, the macro addresses, and the spatio-temporal signals of the pixels in each group to obtain signal data packets for the plurality of pixel groups, reducing the temporal and address redundancy of the encoding while retaining the signal information of each pixel, so that corresponding images can be recovered from the signal data packets and various tasks performed on those images.
The image sensor-based data processing method according to the embodiments of the present disclosure may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc., and the method may be implemented by a processor invoking computer readable program instructions stored in a memory. The servers may be independent physical servers, a server cluster consisting of multiple servers, or cloud servers capable of cloud computing.
In a first aspect, embodiments of the present disclosure provide a data processing method based on an image sensor.
Fig. 2 is a flowchart of a data processing method based on an image sensor according to an embodiment of the disclosure. Referring to fig. 2, the data processing method includes:
in step S21, spatiotemporal signals of a plurality of pixels are acquired by an image sensor.
In step S22, a plurality of pixels are grouped to obtain a plurality of pixel groups.
In step S23, the group address of each pixel group and the macro addresses of the plurality of pixel groups are determined from the pixel addresses within each pixel group.
In step S24, encoding is performed according to the sampling period corresponding to the spatio-temporal signal, the group address of each pixel group, the macro addresses of the plurality of pixel groups, and the spatio-temporal signal of the pixels in each pixel group, so as to obtain signal data packets of the plurality of pixel groups.
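Steps S22 to S24 can be sketched in Python as follows. This is a minimal illustration only: the 4×4 group size, the dictionary-based event input, and the packet field names (`t`, `macro_addr`, `group_addr`, `payload`) are assumptions, since the patent does not fix a concrete layout.

```python
def encode_groups(events, t, group_w=4, group_h=4, macro_w=4, macro_h=4):
    """Group active pixels into fixed-size blocks, derive a group address per
    block and a macro address per block-of-blocks, and emit one packet per
    group. `events` maps (x, y) pixel addresses to spatio-temporal signal
    values; `t` is the index of the current sampling period."""
    groups = {}
    for (x, y), signal in events.items():
        gx, gy = x // group_w, y // group_h        # group address from pixel address
        # store only the intra-group offset, not the full pixel address
        groups.setdefault((gx, gy), []).append(((x % group_w, y % group_h), signal))
    packets = []
    for (gx, gy), members in sorted(groups.items()):
        packets.append({
            "t": t,                                 # one global timestamp per packet
            "macro_addr": (gx // macro_w, gy // macro_h),  # coarser macro address
            "group_addr": (gx, gy),
            "payload": sorted(members),             # intra-group offsets + signals
        })
    return packets

pkts = encode_groups({(0, 0): 1, (1, 0): -1, (9, 9): 1}, t=7)
```

Because all pixels of one group share a single group address, macro address, and timestamp, the per-event address and time overhead that the AER format pays for every pixel is amortized over the whole group.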
According to the embodiments of the disclosure, spatio-temporal signals of a plurality of pixels are acquired through the image sensor; these signals can be used to determine how each pixel's signal changes in the time dimension and how it differs from its neighboring pixels, so that the signal information of the pixels is reflected more comprehensively. Second, the pixels are grouped into a plurality of pixel groups; this exploits the locality of spatio-temporal signal changes by placing pixels with similar changes into the same group, reducing the extra redundancy that encoding address information for every change event would otherwise introduce in subsequent coding. Next, the group address of each pixel group and the macro addresses of the plurality of pixel groups are determined from the pixel addresses within each group; encoding with the coarser-grained macro and group addresses reduces address-coding redundancy compared with encoding each pixel address individually. Finally, encoding is performed according to the sampling period of the spatio-temporal signals, the group addresses, the macro addresses, and the spatio-temporal signals of the pixels in each group to obtain signal data packets for the plurality of pixel groups, reducing the temporal and address redundancy of the encoding while retaining the signal information of each pixel, so that corresponding images can be recovered from the signal data packets and various tasks performed on those images.
In some alternative implementations, in step S21, the image sensor includes a signal sensor that can sense light intensity, and one image sensor may correspond to a plurality of pixels, which may be arranged in various forms, thereby forming a corresponding pixel array. By means of the spatio-temporal signals of these pixel arrays, a corresponding image can be obtained.
In some alternative implementations, the spatiotemporal signal is a light intensity class signal, and the spatiotemporal signal can be used to characterize at least signal variation information of the pixel in a time dimension and signal difference information in a space dimension, i.e., the spatiotemporal signal of the pixel can reflect the signal information of the pixel in both the time dimension and the space dimension.
In some alternative implementations, the image sensor may also be used to acquire color signals for a plurality of pixels. The color signals of the pixels can reflect the color information of the pixels, and color images with color distribution can be obtained through the color signals of the pixels.
In some alternative implementations, after light strikes the surface of the object to be photographed and propagates along optical paths involving reflection, refraction, and the like, the image sensor may capture an original signal (which may involve signal processing such as photoelectric conversion, without limitation) and decompose it into a luminance signal Y and a chrominance signal. The luminance signal may be used to determine the spatio-temporal signal, while the chrominance signal may be decomposed into a color-difference signal U and a color-difference signal V; a matrix operation on the luminance signal Y, the color-difference signal U, and the color-difference signal V then yields the color signal, i.e., the RGB signal.
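The "matrix operation" on Y, U, and V can be illustrated with the widely used BT.601 analog-YUV coefficients. Note the patent does not specify which matrix is used, so these coefficients are an assumption for illustration only:

```python
def yuv_to_rgb(y, u, v):
    """Convert one (Y, U, V) triple to (R, G, B).

    Coefficients are the common BT.601 analog-YUV values; any other
    fixed 3x3 matrix would fit the patent's description equally well."""
    r = y + 1.13983 * v
    g = y - 0.39465 * u - 0.58060 * v
    b = y + 2.03211 * u
    return r, g, b
```

With zero color-difference signals the result is an achromatic (gray) pixel whose R, G, and B all equal Y, which matches the role of Y as the pure luminance component.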
In some alternative implementations, two image sensors may be provided, one for acquiring the light intensity signal of the pixel to obtain the spatiotemporal signal of the pixel and the other for acquiring the color signal of the pixel.
Fig. 3 is a schematic diagram of pixel distribution corresponding to an image sensor according to an embodiment of the disclosure. The image sensor includes two types, one for acquiring light intensity signals of pixels and the other for acquiring color signals of pixels. Accordingly, pixels are also classified into two types, one type being light intensity pixels for capturing light intensity signals and the other type being color pixels for capturing color signals, as the light sensing units of the image sensor. After the light intensity signals are obtained through the light intensity pixels, corresponding space-time signals can be obtained through further processing.
As shown in fig. 3, the light intensity pixels and the color pixels are interleaved in both the row and column directions: four color pixels surround each light intensity pixel and, similarly, four light intensity pixels surround each color pixel (edge pixels excepted). With this arrangement, since the light intensity pixels and the color pixels are distributed fairly uniformly, the resolution of the image obtained from the spatio-temporal signals is close to that of the image obtained from the color signals.
It should be noted that the above distributions of light intensity pixels and color pixels are merely examples; other distributions may be used. For example, the light intensity pixels and the color pixels may alternate by rows or columns. As another example, for the area to be photographed, a number of pixels may be randomly selected as light intensity pixels, with the remaining area set as the color area. Alternatively, the region to be photographed may be divided in advance into a luminance-sensitive region and a color-sensitive region, with a larger number of light intensity pixels placed in the former and a larger number of color pixels in the latter. The distribution of light intensity pixels and color pixels may also be determined according to experience, statistical data, simulation data, and the like, which is not limited by the embodiments of the present disclosure.
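The cross arrangement described for Fig. 3 amounts to a checkerboard mask. A small sketch (the 'L'/'C' labels and checkerboard parity are illustrative choices, not taken from the patent):

```python
def make_mask(rows, cols):
    """Checkerboard layout: 'L' = light intensity pixel, 'C' = color pixel.
    Every interior 'L' then has four 'C' neighbours and vice versa, matching
    the description of Fig. 3."""
    return [['L' if (r + c) % 2 == 0 else 'C' for c in range(cols)]
            for r in range(rows)]
```

For example, `make_mask(3, 3)` places an 'L' at the center with its four edge-adjacent neighbours all 'C'.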
In other words, in some optional implementations, the image sensor in the embodiments of the present disclosure is a bimodal sensor, based on which both a light intensity image and a color image of the object to be photographed can be obtained. The light intensity image shows how the object changes in the time and space dimensions, while the color image presents the object from the color perspective, so the bimodal sensor presents object information in more dimensions and facilitates image analysis from multiple angles.
In some alternative implementations, the spatio-temporal signals of the plurality of pixels are generated and output based on an event-triggered manner. In other words, the plurality of pixels work in an asynchronous sampling and output mode, and a space-time signal is output outwards as long as the signal change intensity of a certain pixel exceeds a preset threshold value. Wherein the signal variation intensity comprises a signal variation intensity in a time dimension and/or a signal variation intensity in a space dimension.
For example, if the difference between the light intensity of a certain pixel at the current time and the light intensity of the pixel at the previous time is greater than a preset threshold, a space-time signal may be output.
In an exemplary embodiment, if the difference between the light intensity of a pixel at the current time and that of its surrounding adjacent pixels is a first difference, and the difference between the light intensity of that pixel at the previous time and that of the same adjacent pixels is a second difference, then a spatio-temporal signal may be output when the difference between the first difference and the second difference is greater than a preset threshold.
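The two trigger conditions above (temporal change, and change of the pixel-versus-neighbour difference) can be sketched as follows; the function name, the simple-mean neighbourhood aggregation, and combining the two conditions with a logical OR are assumptions for illustration:

```python
def fires(curr, prev, neighbors_curr, neighbors_prev, threshold):
    """Return True if the pixel should emit a spatio-temporal signal.

    curr / prev: this pixel's light intensity at the current / previous time.
    neighbors_curr / neighbors_prev: intensities of the same adjacent pixels
    at the current / previous time."""
    # temporal condition: intensity changed enough since the previous time
    temporal = abs(curr - prev) > threshold
    # spatial condition: the pixel-vs-neighbour difference itself changed enough
    d1 = curr - sum(neighbors_curr) / len(neighbors_curr)  # first difference
    d2 = prev - sum(neighbors_prev) / len(neighbors_prev)  # second difference
    spatial = abs(d1 - d2) > threshold
    return temporal or spatial
```

A pixel whose own intensity is steady but whose contrast against its neighbours suddenly changes still fires under the spatial condition, which is what distinguishes this scheme from a purely temporal event camera.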
In some alternative implementations, the spatiotemporal signals of the plurality of pixels are acquired based on a uniform sampling period, in other words, the spatiotemporal signals of the plurality of pixels are acquired in a synchronized manner. The sampling period may be determined according to any one or more of experience, statistics, simulation results, and imaging requirements, which is not limited by embodiments of the present disclosure.
In some alternative implementations, acquiring, by an image sensor, spatiotemporal signals of a plurality of pixels based on a preset sampling period includes: and sampling output data of the image sensor according to the sampling period to obtain space-time signals of a plurality of pixels corresponding to the image sensor. In such an implementation, the operating frequency of the image sensor itself is not of great concern, as long as its output data is sampled according to the sampling period, and thus the operating frequency of the image sensor may not be changed or updated.
In some alternative implementations, acquiring, by an image sensor, spatiotemporal signals of a plurality of pixels based on a preset sampling period includes: in the case that the image sensor outputs data outwards based on the sampling period, the spatiotemporal signals of the corresponding plurality of pixels are obtained according to the output data of the image sensor. In this implementation, the image sensor is operated with a sampling period, so that the spatiotemporal signals of a plurality of pixels can be obtained directly according to the output data of the image sensor.
In summary, through a unified time step (corresponding to the sampling period), the image sensor records each pixel's variation in the time dimension and its difference in the space dimension, thereby realizing globally synchronized sampling output.
It should be noted that, in the related art, each pixel of an event camera operates independently and asynchronously, so it cannot collect and output the signals of a plurality of pixels based on a uniform sampling period. In the asynchronous sampling mode, the sampling time of each pixel is relatively independent, and the corresponding space-time signals must be received and processed by an asynchronous chip. When operating based on a unified sampling period, in contrast, the space-time signals of a plurality of pixels can be acquired uniformly, so conventional data receiving chips and data processing chips can be supported. Moreover, since the plurality of pixels share the same sampling time, a global time coding mode can be adopted for them when they are subsequently encoded in the time dimension, without encoding each pixel independently in the time dimension, thereby reducing the coding complexity and redundancy of the time dimension.
Illustratively, the space-time signal is a light-intensity-class signal and comprises at least a time dimension variation and a spatial dimension difference; the time dimension variation is the change between the light intensity of the pixel in the current sampling period and its light intensity in the previous sampling period, and the spatial dimension difference is the difference between the light intensity of the pixel in the current sampling period and that of at least one adjacent pixel in the current sampling period. The adjacent pixels comprise one or more pixels positionally adjacent to the current pixel.
Illustratively, in addition to the time dimension variation and the spatial dimension difference, the space-time signal may also include a light intensity amount, which reflects the light intensity of the pixel in the current sampling period. Through such a space-time signal, not only the change of the pixel and its difference from adjacent pixels can be obtained, but also the absolute light intensity of the pixel, which is equivalent to combining a traditional camera with an event camera on the basis of a bimodal sensor.
Fig. 4 is a schematic diagram of a sampling operation process based on an image sensor according to an embodiment of the disclosure. Referring to fig. 4, a plurality of pixels (for example, R1_0, R1_1, …, R3_N1) are arranged in an array to form a 3×N1 pixel array (N1 is an integer greater than or equal to 1), and the space-time signals of each pixel in the pixel array are collected under the triggering of a trigger pulse. The trigger pulse is output with the preset sampling period as the time step, and the space-time signals of the plurality of pixels are obtained in response to the trigger pulse.
Taking pixel R2_2 as an example, for the i-th sampling period (i ≥ 1), its space-time signal includes a time dimension variation and spatial dimension differences. The time dimension variation is the difference between the light intensity G2_2(i) of pixel R2_2 in the i-th sampling period and its light intensity G2_2(i−1) in the (i−1)-th sampling period. The adjacent pixels of pixel R2_2 are the 8 pixels arranged around it, namely R1_1, R1_2, R1_3, R2_1, R2_3, R3_1, R3_2, and R3_3; accordingly, there are 8 spatial dimension differences, namely the differences between G2_2(i) and, respectively, G1_1(i), G1_2(i), G1_3(i), G2_1(i), G2_3(i), G3_1(i), G3_2(i), and G3_3(i), each taken in the i-th sampling period.
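The per-pixel computation above can be sketched in Python (a minimal illustration only; the frame-buffer input format, the function name, and the 8-neighbour scan order are assumptions, since the patent describes sampling at the sensor-circuit level rather than as software):

```python
import numpy as np

def spatiotemporal_signal(frames, i, r, c):
    """Space-time signal of pixel (r, c) for sampling period i.

    frames: array of shape (T, H, W) holding the light intensity of every
    pixel at each sampling period (hypothetical input format).
    Returns the time dimension variation and the list of spatial dimension
    differences against the up-to-8 surrounding adjacent pixels.
    """
    cur = frames[i]
    # Time dimension variation: G(i) - G(i-1) for this pixel.
    dt = int(cur[r, c]) - int(frames[i - 1][r, c])
    # Spatial dimension differences: G_pixel(i) - G_neighbour(i)
    # for each positionally adjacent pixel inside the array bounds.
    ds = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < cur.shape[0] and 0 <= nc < cur.shape[1]:
                ds.append(int(cur[r, c]) - int(cur[nr, nc]))
    return dt, ds
```

For an interior pixel such as R2_2 this yields one time dimension variation and 8 spatial dimension differences, matching the enumeration above.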
Fig. 5 is a schematic diagram of a sampling operation process based on an image sensor according to an embodiment of the disclosure. Referring to fig. 5, a plurality of pixels are arranged in a staggered array to form a 3×N2 pixel array (N2 is an integer greater than or equal to 1), and the space-time signals of each pixel in the pixel array are collected under the triggering of the trigger pulse.
Taking pixel R2_0 as an example, for the i-th sampling period, its space-time signal includes a time dimension variation and spatial dimension differences. The time dimension variation is the difference between the light intensity G2_0(i) of pixel R2_0 in the i-th sampling period and its light intensity G2_0(i−1) in the (i−1)-th sampling period. The adjacent pixels of pixel R2_0 are the 4 pixels arranged around it, namely R1_0, R1_1, R3_0, and R3_1; accordingly, there are 4 spatial dimension differences, namely the differences between G2_0(i) and, respectively, G1_0(i), G1_1(i), G3_0(i), and G3_1(i), each taken in the i-th sampling period.
It should be noted that the above is only an example of adjacent pixels; in other implementations, other numbers and positions of pixels may be determined as adjacent pixels. Illustratively, for the pixel R2_2 in fig. 4, one or more of the above 8 adjacent pixels may be selected arbitrarily, and the corresponding spatial dimension differences may be determined based on the selected adjacent pixels (e.g., R1_1, R1_3, R3_1, and R3_3 are selected as the adjacent pixels for determining the spatial dimension differences); alternatively, a larger range of pixels may be selected as adjacent pixels (e.g., in addition to the above 8 adjacent pixels, R1_0, R2_0, and R3_0 may also be selected as adjacent pixels of R2_2), which is not limited by the embodiments of the present disclosure.
It should also be noted that in the embodiment of the present disclosure, independent data precision may be adopted between the time dimension variation and the space dimension difference in the spatio-temporal signal, so that the data processing method not only supports sampling output of global synchronization, but also supports multiple data precision types.
In some alternative implementations, the time dimension variation has a first preset data precision and the spatial dimension difference has a second preset data precision; the first preset data precision may be the same as or different from the second preset data precision. In other words, the data precisions of the time dimension variation and the spatial dimension difference are independent of each other, and may be the same or different, which is not limited by the embodiments of the present disclosure.
It should be noted that, since the time dimension variation and the spatial dimension difference have independent data precisions, various manners may be adopted when encoding them and constructing data packets: for example, when their data precisions differ, the two may be encoded separately to obtain respective data packets; when their data precisions are the same, the two may be encoded into the same data packet. This independent data precision setting improves the flexibility of encoding.
In some alternative implementations, after obtaining the spatiotemporal signals of the plurality of pixels, the plurality of pixels may be grouped in step S22, resulting in a plurality of pixel groups. Wherein each pixel group comprises at least one pixel, and the number of pixels of the plurality of pixel groups can be the same or different, which is not limited by the embodiment of the present disclosure.
In some alternative implementations, grouping the plurality of pixels to obtain a plurality of pixel groups includes: dividing a pixel strip formed by a plurality of adjacent pixels in a row into a pixel group to obtain a plurality of pixel groups; and/or dividing the pixel block formed by a plurality of adjacent pixels in a plurality of rows into one pixel group to obtain a plurality of pixel groups. Where a row includes a row or a column, a row of pixels may refer to a row of pixels or a column of pixels.
In other words, in the embodiment of the present disclosure, the signal variation is similar in consideration of the pixels closer to each other, and therefore, the pixels are grouped such that the pixels closer to each other are divided into one pixel group. In grouping, a pixel bar composed of a plurality of adjacent pixels may be regarded as one pixel group in units of rows or columns; a pixel block made up of a plurality of pixels may be regarded as one pixel group in units of a plurality of rows or columns of pixels.
Illustratively, for an n×m pixel array, when the pixel array is grouped, for the i-th row of pixels (i ≤ n), the 1st to k-th pixels may be divided into one pixel group (1 < k < m), the (k+1)-th to 2k-th pixels into another pixel group, and so on until all pixels in the row are divided into corresponding pixel groups (the number of pixels in the last pixel group may be less than or equal to k), thereby grouping the pixel array into a plurality of pixel groups.
Illustratively, the plurality of pixels form an n×m pixel array; h×k is taken as the size of one pixel group, and the 1st to h-th rows of pixels (h < n) are divided into a plurality of pixel groups accordingly; the above operation is then repeated for the (h+1)-th to 2h-th rows of pixels, and so on, until all pixels are divided into corresponding pixel groups.
In some alternative implementations, grouping the plurality of pixels to obtain a plurality of pixel groups includes: determining the number of pixels in each pixel group according to the data precision of the space-time signal and the format requirement of the signal data packet; determining a grouping size according to the pixel data in each pixel group, wherein the grouping size is used for representing the data sizes of a plurality of pixels in the pixel group, and the grouping size corresponds to a vector form and/or a matrix form; the plurality of pixels are grouped based on the grouping size, resulting in a plurality of pixel groups.
For example, if the signal data packet format requires a data payload of 24 bits (bit) and the data precision of the space-time data is 2 bits, it is determined that at most 12 pixels can be set in one pixel group; based on this, any one or more of 1×12, 12×1, 2×6, 6×2, 3×4, 4×3, and the like can be determined as the grouping size, and the plurality of pixels are grouped based on the determined grouping size to obtain a plurality of pixel groups. For example, if the grouping size is determined to be 12×1, the 1st to 12th pixels in the 1st row are divided into one pixel group, the 13th to 24th pixels into another pixel group, and so on until all pixels are divided into corresponding pixel groups.
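The group-size derivation in this example can be sketched as follows (the helper names are assumptions for illustration; the 24-bit payload and 2-bit precision figures are the ones used in the example above):

```python
def grouping_sizes(payload_bits: int, precision_bits: int):
    """Enumerate candidate h-by-k grouping sizes: the payload capacity
    fixes the maximum number of pixels per group (e.g. 24 // 2 = 12),
    and every factorization h*k of that count is a candidate size."""
    max_pixels = payload_bits // precision_bits
    return [(h, max_pixels // h)
            for h in range(1, max_pixels + 1) if max_pixels % h == 0]

def group_row(row_len: int, k: int):
    """Split one pixel row into strips of at most k pixels, returned as
    half-open (start, end) index ranges; the last strip may be shorter."""
    return [(s, min(s + k, row_len)) for s in range(0, row_len, k)]
```

With `payload_bits=24` and `precision_bits=2` this yields exactly the sizes 1×12, 2×6, 3×4, 4×3, 6×2, and 12×1 listed above, and `group_row(160, 12)` reproduces the 14 strips per 160-pixel row of fig. 6 (13 full strips plus one strip of 4 pixels).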
Fig. 6 is a schematic diagram of grouping pixels according to an embodiment of the disclosure. Referring to fig. 6, a plurality of pixels form a pixel array with m×160, and if the format requirement of the signal data packet is 24 bits of data occupation bits and the data precision of the space-time data is 2 bits, it is determined that at most 12 pixels can be set in one pixel group.
As shown in fig. 6, a pixel bar composed of 12 adjacent pixels is set as one pixel group in units of pixel rows. Each row of pixels may thus be divided into 14 pixel groups, where the 1st to 13th pixel groups each include 12 pixels and the 14th pixel group includes only 4 pixels.
Fig. 7 is a schematic diagram of grouping pixels according to an embodiment of the disclosure. Referring to fig. 7, a plurality of pixels form a pixel array with m×160, and if the format requirement of the signal data packet is 24 bits of data occupation bits and the data precision of the space-time data is 2 bits, it is determined that at most 12 pixels can be set in one pixel group.
As shown in fig. 7, a grouping size of 3×4 is determined, and the pixel array is divided into a plurality of pixel groups based on this grouping size. Illustratively, for the 1st to 3rd rows of pixels, the first 4 pixels of row 1, the first 4 pixels of row 2, and the first 4 pixels of row 3 are divided into one group, resulting in the 1st pixel group of rows 1 to 3; similarly, the remaining pixels are grouped accordingly, resulting in the 2nd to 40th pixel groups. The 4th to m-th rows of pixels are processed in a similar manner, thereby completing the grouping of the entire pixel array and obtaining a plurality of pixel groups.
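The 3×4 block grouping of fig. 7 can be sketched the same way (a hypothetical helper; each block is identified by the coordinate of its top-left pixel):

```python
def group_blocks(rows: int, cols: int, h: int, k: int):
    """Partition a rows-by-cols pixel array into h-by-k blocks, scanning
    left to right and top to bottom; each block is represented by the
    (row, col) coordinate of its top-left pixel."""
    return [(r, c) for r in range(0, rows, h) for c in range(0, cols, k)]
```

For one 3-row band of a 160-pixel-wide array, `group_blocks(3, 160, 3, 4)` yields the 40 pixel groups per band described above.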
It should be noted that, in some alternative implementations, a certain grouping size may be used for a partial area of the pixel array, and another grouping size may be used for other areas, or an irregular shape may be used for grouping, which is not limited by the embodiments of the present disclosure. For example, for the 160×m pixel array, the grouping manner shown in fig. 6 may be used for rows 1-3, and the grouping manner shown in fig. 7 may be used for the remaining pixel areas, so that, among the obtained pixel groups, a part of the pixel groups correspond to the pixel stripe form and a part of the pixel groups correspond to the pixel block form.
In some alternative implementations, after grouping the plurality of pixels to obtain the plurality of pixel groups, in order to facilitate encoding the addresses of the pixels subsequently, in step S23, a group address of each pixel group and a macro address of the plurality of pixel groups are determined according to the addresses of the pixels within each pixel group, where the macro addresses of the plurality of pixel groups are determined according to the row addresses of the pixel rows occupied by the plurality of pixel groups.
Taking the pixel groups shown in fig. 6 as an example, if the address of the i-th pixel row is add(i) and the address of the pixel in the i-th row and j-th column is add(i, j), then the group address of the 1st pixel group is add(1,1)-add(1,12) (i.e., it represents the 1st to 12th pixels (R1_0 to R1_11) in the 1st row), and the group address of the 2nd pixel group is add(1,13)-add(1,24) (i.e., it represents the 13th to 24th pixels (R1_12 to R1_23) in the 1st row); by analogy, the group address of each remaining pixel group can be determined, where the 14th pixel group includes only four pixels and its group address is add(1,157)-add(1,160) (i.e., it represents the 157th to 160th pixels in the 1st row).
Further, since the 1 st to 14 th pixel groups occupy the first row of pixels, macro addresses of the 1 st to 14 th pixel groups are determined to be add (1) (i.e., represent addresses corresponding to the 1 st row of pixels), and similarly, macro addresses of other pixel groups can be determined.
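Under the strip grouping of fig. 6, the group addresses and macro address can be derived mechanically (a sketch; the add(i)/add(i, j) strings mirror the notation above, and the helper name is an assumption):

```python
def addresses_for_row(row: int, row_len: int, k: int):
    """For one pixel row split into strips of k pixels, return the row's
    macro address add(row) and, per pixel group, the group address range
    (add(row, first), add(row, last)) using 1-based pixel indices."""
    groups = []
    for s in range(0, row_len, k):
        e = min(s + k, row_len)  # last strip may hold fewer than k pixels
        groups.append((f"add({row},{s + 1})", f"add({row},{e})"))
    return f"add({row})", groups
```

For row 1 of a 160-pixel-wide array with k = 12, this reproduces the 14 group addresses above, ending with add(1,157)-add(1,160), all sharing the macro address add(1).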
In the related art, since pixels are not grouped, address encoding must be performed for each individual pixel, and there is no determination of group addresses of pixel groups or macro addresses of a plurality of pixel groups. In the embodiment of the present disclosure, the pixels are grouped, so when encoding is performed in step S24 after the group addresses and macro addresses are determined, the group address and the macro address can be used for the address dimension, and the address of each pixel does not need to be encoded separately, thereby reducing the redundancy of address encoding.
In some alternative implementations, the signal packets include time stamp packets, address packets, and time space packets; correspondingly, encoding is performed according to a sampling period corresponding to the space-time signal, a group address of each pixel group, macro addresses of a plurality of pixel groups, and the space-time signal of the pixels in each pixel group, so as to obtain a signal data packet of the plurality of pixel groups, including: determining the time stamps of a plurality of pixel groups according to the sampling period corresponding to the space-time signal, and uniformly coding the time stamps of the plurality of pixel groups to obtain time stamp data packets of the plurality of pixel groups; encoding according to macro addresses of a plurality of pixel groups to obtain address data packets of the pixel groups; and aiming at each pixel group, encoding according to the group address of the pixel group and the space-time signal of each pixel in the pixel group to obtain a space-time data packet of each pixel group.
In the related art, each pixel adopts asynchronous sampling and output modes, so that the encoding of each pixel comprises the event time of the pixel, the address of the pixel and the corresponding event polarity (corresponding to the signal), which inevitably generates a great amount of time redundancy and address redundancy. In order to alleviate this problem, in the embodiments of the present disclosure, global unified coding is adopted for time, so that multiple pixels share one time code, and redundancy of the time code is reduced; for addresses, macro addresses and group addresses are adopted for unified coding, the address range of pixels can be determined through the macro addresses, and further the group addresses are combined, so that the addresses with smaller granularity can be refined, the use requirements can be met, and meanwhile, the redundancy of address coding in the related technology is relieved.
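The resulting packet stream for one sampling period can be sketched as follows (an illustrative layout only; the field names and dict representation are assumptions, not the patent's on-wire format): one shared timestamp packet, then, per pixel row, an address packet followed by that row's space-time packets.

```python
def encode_frame(timestamp: int, rows: int, groups_per_row: int, codes: dict):
    """Emit one timestamp packet shared by all pixels, one address packet
    per pixel row (macro address), and one space-time packet per pixel
    group; `codes` maps (row, group) -> encoded space-time payload."""
    packets = [{"type": "timestamp", "value": timestamp}]
    for r in range(1, rows + 1):
        packets.append({"type": "address", "macro": f"add({r})"})
        for g in range(1, groups_per_row + 1):
            packets.append({"type": "spacetime", "group": (r, g),
                            "code": codes.get((r, g), b"")})
    return packets
```

The single timestamp packet is what removes the per-pixel time redundancy of the related art, and the one-address-packet-per-row layout is what removes the per-pixel address redundancy.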
Fig. 8 is a schematic diagram of a signal data packet according to an embodiment of the disclosure. Referring to fig. 8, it shows the working procedure of encoding to obtain signal data packets in the case where a plurality of pixels are grouped into the m×14 pixel groups shown in fig. 6. The signal data packets comprise a time stamp data packet, address data packets, and space-time data packets.
As shown in fig. 8, the time stamp data packet is a time code for all pixels, which characterizes the sampling time of the spatio-temporal signals of a plurality of pixels.
For example, if the address of the i-th pixel row is add(i) and the address of the pixel in the i-th row and j-th column is add(i, j), the macro address field in address data packet 1 takes the row address of the 1st row of pixels, i.e., add(1). The group address field in space-time data packet 1_1 takes the value add(1,1)-add(1,12), representing that the packet corresponds to the 1st to 12th pixels in the 1st row, and its space-time coding field carries the space-time signal coding of those pixels; the group address field in space-time data packet 1_2 takes the value add(1,13)-add(1,24), representing the 13th to 24th pixels in the 1st row; and so on, until the group address field in space-time data packet 1_14 takes the value add(1,157)-add(1,160), representing the 157th to 160th pixels in the 1st row.
The macro address field in address data packet 2 takes the row address of the 2nd row of pixels, i.e., add(2); the group address field in space-time data packet 2_1 takes the value add(2,1)-add(2,12), representing that it corresponds to the 1st to 12th pixels in the 2nd row, and the space-time coding field corresponds to the space-time signal coding of those pixels. Other address data packets and space-time data packets are similar and will not be described again here.
In some alternative implementations, for the pixel groups shown in fig. 7, the signal data packets also include time stamp data packets, address data packets, and space-time data packets. The time stamp data packet is likewise obtained by uniformly time-encoding all pixels based on the sampling time. The 1st to 40th pixel groups corresponding to rows 1 to 3 correspond to one address data packet, whose macro address field takes the row addresses of the 1st to 3rd rows of pixels, i.e., add(1)-add(3). Taking the 1st pixel group of rows 1 to 3 as an example, the group address field in the corresponding space-time data packet takes the value add(1,1)-add(3,4), representing the 1st to 4th pixels in each of rows 1, 2, and 3, and the space-time coding field corresponds to the space-time signal coding of these 12 pixels.
In some alternative implementations, the spatio-temporal signal includes a temporal dimension variation and a spatial dimension difference, and the temporal dimension variation has a first preset data precision and the spatial dimension difference has a second preset data precision, and the spatio-temporal data packet includes a temporal dimension sub-packet and a spatial dimension sub-packet; encoding according to the group address of the pixel group and the space-time signal of each pixel in the pixel group to obtain a space-time data packet of each pixel group, including: according to the first preset data precision, coding the time dimension variation of each pixel in the pixel group to obtain a time dimension code, and coding the group address of the pixel group to obtain a first group address code; obtaining a time dimension sub-data packet of the pixel group according to the time dimension code and the first group address code; coding the space dimension difference of each pixel in the pixel group according to the second preset data precision to obtain a space dimension code, and coding the group address of the pixel group to obtain a second group address code; and obtaining the space dimension sub-data packet of the pixel group according to the space dimension code and the second group address code. By the processing mode, data packets with different data precision can be output through a unified output protocol, so that the output difficulty is simplified, and the subsequent decoding process is also simplified.
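The key mechanism, packing values at an independently chosen bit precision into a sub-packet payload, can be illustrated as follows (a sketch; the big-endian packing order is an assumption):

```python
def pack_values(values, bits: int) -> int:
    """Concatenate unsigned values, each occupying `bits` bits, into one
    integer payload, most significant value first."""
    word = 0
    for v in values:
        if not 0 <= v < (1 << bits):
            raise ValueError(f"{v} does not fit in {bits} bits")
        word = (word << bits) | v
    return word
```

For example, `pack_values([1, 2, 3], 2)` packs three 2-bit codes into the 6-bit payload 0b011011; the time dimension sub-packet and the space dimension sub-packet would each call this with their own preset precision, which is what lets the two precisions differ while sharing one output protocol.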
It should be noted that, in some alternative implementations, the timestamp packet may further include a frame data precision field in addition to the timestamp field, and a corresponding frame data precision identifier (e.g., the frame data precision identifier may be determined based on a sampling period); the address data packet may further include a data precision field, corresponding to the data precision identifier. Alternatively, some spare data bits may be set as spare bits in advance to be used for expansion when the expansion field is required.
In some alternative implementations, the light intensity of some pixels may not change, or may change only slightly, across two adjacent sampling periods; there may therefore be pixel groups in which the time dimension variations of all pixels are small values and the spatial dimension differences are substantially consistent with the previous sampling period. In such a case, these pixel groups need not be encoded, thereby reducing the encoding amount and the data transmission amount.
In some alternative implementations, the pixel groups for the above case may be screened out by setting a state for the pixel groups. The preset threshold is preset, when the variation of the light intensity of each pixel in the pixel group in the adjacent sampling period is smaller than or equal to the preset threshold, the state of the pixel group is determined to be an invalid state, and when the variation of the light intensity of at least one pixel in the pixel group in the adjacent sampling period is larger than the preset threshold, the state of the pixel group is determined to be an valid state.
In some alternative implementations, the address data packet includes an address code and a valid pixel group code; the address coding is obtained by coding according to macro addresses of a plurality of pixel groups (corresponding macro address fields), the effective pixel group coding is obtained by coding according to states of all the pixel groups, the states of the pixel groups comprise an effective state and an ineffective state, the variation of the light intensity of each pixel in the pixel groups in the ineffective state in adjacent sampling periods is smaller than or equal to a preset threshold value, and the variation of the light intensity of at least one pixel in the pixel groups in the effective state in the adjacent sampling periods is larger than the preset threshold value.
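The valid/invalid screening can be sketched as follows (a hypothetical helper; `deltas` holds each pixel's intensity change across adjacent sampling periods):

```python
def group_state(deltas, threshold) -> str:
    """A pixel group is 'valid' if at least one pixel's intensity change
    across adjacent sampling periods exceeds the preset threshold, and
    'invalid' if every change is less than or equal to it."""
    return "valid" if any(abs(d) > threshold for d in deltas) else "invalid"
```

Groups found to be in the invalid state can then be skipped when encoding space-time signals, reducing both the encoding amount and the data transmission amount.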
Further, for the pixel group in the invalid state, the encoding amount can be reduced by not encoding the space-time signal, and the data transmission amount can be reduced by discarding the space-time data packet of the pixel group.
In addition, in some alternative implementations, it is considered that the number of pixels in which the light intensity changes between adjacent sampling periods is relatively small, so that there may be a large number of zero values or small values in the time dimension variation of the pixels, so that the time dimension variation has a certain sparsity. With respect to this feature, in some alternative implementations, the time dimension variation may be encoded by way of compression encoding, thereby further reducing the amount of memory space and data transmission occupied.
In some alternative implementations, the time dimension variations of the plurality of pixels within the pixel group correspond to a first matrix, the first matrix corresponding to a row vector form, a column vector form, or a matrix form; encoding the time dimension variation of each pixel in the pixel group according to the first preset data precision to obtain a time dimension code includes: generating a first flag bit matrix with the same size as the first matrix according to the comparison result between the time dimension variation of each pixel in the pixel group and a first preset invalid value, where the elements of the first flag bit matrix correspond to the elements of the first matrix and include first valid flag bits and first invalid flag bits: a first valid flag bit represents that the time dimension variation of the pixel corresponding to that element of the first matrix is not the first preset invalid value, and a first invalid flag bit represents that it is the first preset invalid value; generating a first compression vector from the elements of the first matrix whose values are not the first preset invalid value; and encoding the first flag bit matrix and the first compression vector to obtain the time dimension code of the pixel group. The first preset invalid value may be a zero value or a small value (for example, a time dimension variation smaller than a preset threshold may be regarded as the first preset invalid value).
For example, the pixel group corresponds to a pixel bar formed by a plurality of adjacent pixels in one row, so the time dimension variations of the plurality of pixels in the pixel group correspond to a row vector form. Assuming that the size of the first matrix W1 is 1×12, W1 = {12,0,0,23,4,56,0,0,16,0,0,0}, and the first preset invalid value is 0, the compression encoding process based on the first matrix is as follows:
First, a first flag bit matrix W2 of size 1×12 is generated from W1, where W2 = {1,0,0,1,1,1,0,0,1,0,0,0}. The elements of W2 correspond one-to-one to the elements of W1: when W1(1, j) takes the value 0 (j denotes the column number of the element), W2(1, j) corresponds to a first invalid flag bit and takes the value 0; when W1(1, j) takes a non-zero value, W2(1, j) corresponds to a first valid flag bit and takes the value 1.
Next, the columns whose values are 0 in W1 are deleted, obtaining the first compression vector W1' = {12,23,4,56,16}.
Finally, the time dimension codes of the pixel groups can be obtained by coding the W1' and the W2.
Illustratively, the pixel group corresponds to a pixel block composed of a plurality of adjacent pixels in a plurality of rows, and thus the time dimension variations of the plurality of pixels in the pixel group correspond to a matrix form. Assuming that the size of the first matrix W1 is 3×4, W1 = {12,0,0,23; 4,56,0,0; 16,0,0,0}, and the first preset invalid value is 0, the compression encoding process based on the first matrix is as follows:
First, a first flag bit matrix W2 of size 3×4 is generated from W1, where W2 = {1,0,0,1; 1,1,0,0; 1,0,0,0}. The elements of W2 correspond one-to-one to the elements of W1: when W1(i, j) takes the value 0 (i and j respectively denote the row and column numbers of the element), W2(i, j) corresponds to a first invalid flag bit and takes the value 0; when W1(i, j) takes a non-zero value, W2(i, j) corresponds to a first valid flag bit and takes the value 1.
Next, the elements with values of 0 in W1 are deleted, and the remaining non-zero elements are arranged in order, so as to generate a first compression vector W1', W1' = {12,23,4,56,16}.
Finally, the time dimension codes of the pixel groups can be obtained by coding the W1' and the W2.
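The flag-bit-plus-compressed-vector scheme of the two worked examples above can be sketched as follows (the function names are assumptions; a decoder is included to show that the encoding is lossless):

```python
def compress(matrix, invalid=0):
    """Build the flag bit matrix (1 where the element is not the preset
    invalid value, 0 where it is) and the compressed vector of the
    remaining elements in scan order."""
    flags = [[0 if v == invalid else 1 for v in row] for row in matrix]
    vector = [v for row in matrix for v in row if v != invalid]
    return flags, vector

def decompress(flags, vector, invalid=0):
    """Rebuild the original matrix from the flag bit matrix and vector."""
    it = iter(vector)
    return [[next(it) if f else invalid for f in row] for row in flags]
```

Applied to the 3×4 example, `compress` reproduces W2 = {1,0,0,1; 1,1,0,0; 1,0,0,0} and W1' = {12,23,4,56,16}; the row-vector case is just a matrix with a single row.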
Similarly, the spatial dimension difference can be processed by adopting a compression coding mode.
In some alternative implementations, the spatial dimension differences of the plurality of pixels within the pixel group correspond to a second matrix, the second matrix corresponding to a row vector form, a column vector form, or a matrix form; encoding the spatial dimension difference of each pixel in the pixel group according to the second preset data precision to obtain a space dimension code includes: generating a second flag bit matrix with the same size as the second matrix according to the comparison result between the spatial dimension difference of each pixel in the pixel group and a second preset invalid value, where the elements of the second flag bit matrix correspond to the elements of the second matrix and include second valid flag bits and second invalid flag bits: a second valid flag bit represents that the spatial dimension difference of the pixel corresponding to that element of the second matrix is not the second preset invalid value, and a second invalid flag bit represents that it is the second preset invalid value; generating a second compression vector from the elements of the second matrix whose values are not the second preset invalid value; and encoding the second flag bit matrix and the second compression vector to obtain the space dimension code of the pixel group. The detailed encoding process may refer to the process for the time dimension variation and is not repeated here.
It should be noted that, in some alternative implementations, after the signal data packet is obtained, it may also be output externally, and the processor/processing core that receives the signal data packet performs the corresponding processing (e.g., decoding the packet, image processing based on the decoded data, etc., which is not limited by the embodiments of the present disclosure).
In summary, in the embodiments of the present disclosure: first, a synchronous sampling and output scheme is adopted, which avoids the poor compatibility of asynchronous processing and remains compatible with the spatio-temporal signal processing of a bimodal image sensor; second, grouping the pixels reduces the redundancy of address data; in addition, the sparsity of the data is fully exploited by compressing the spatio-temporal signal with compression coding, which further reduces the data volume, removes the invalid information carried by large numbers of zero values, and lowers the bandwidth requirement.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for reasons of space, these combinations are not described in detail in the present disclosure. Those skilled in the art will appreciate that, in the methods of the above embodiments, the specific execution order of the steps should be determined by their functions and possible inherent logic.
In a second aspect, embodiments of the present disclosure provide a data processing apparatus based on an image sensor.
Fig. 9 is a block diagram of a data processing apparatus based on an image sensor according to an embodiment of the present disclosure.
Referring to fig. 9, an embodiment of the present disclosure provides an image sensor-based data processing apparatus 900 including:
An acquisition module 910 is configured to acquire, through the image sensor, spatio-temporal signals of a plurality of pixels.
The grouping module 920 is configured to group the plurality of pixels to obtain a plurality of pixel groups.
A determining module 930 is configured to determine a group address of each pixel group and macro addresses of the plurality of pixel groups according to the pixel addresses within each pixel group.
The encoding module 940 is configured to encode the signal data packet of the plurality of pixel groups according to a sampling period corresponding to the spatio-temporal signal, a group address of each pixel group, a macro address of the plurality of pixel groups, and the spatio-temporal signal of the pixel in each pixel group.
According to the embodiments of the present disclosure, first, the acquisition module acquires the spatio-temporal signals of a plurality of pixels through the image sensor. From these spatio-temporal signals, both the signal variation of each pixel in the time dimension and the signal difference between the current pixel and its adjacent pixels can be determined, so the signal information of the pixels is reflected more comprehensively. Second, the grouping module groups the plurality of pixels into pixel groups, taking full account of the locality of spatio-temporal signal changes: pixels with similar changes are divided into one pixel group, reducing the extra redundancy that encoding an address for every change event would otherwise introduce. Further, the determining module determines the group address of each pixel group and the macro addresses of the plurality of pixel groups from the pixel addresses within each group; in subsequent encoding, using the coarser-grained macro addresses and group addresses instead of individual pixel addresses reduces the redundancy of address encoding. Finally, the encoding module encodes the sampling period corresponding to the spatio-temporal signal, the group address of each pixel group, the macro addresses of the pixel groups, and the spatio-temporal signal of the pixels in each group to obtain the signal data packets of the pixel groups. Temporal and address redundancy in the encoding is thus reduced while the signal information of each pixel is preserved, so that the corresponding images can be recovered from the signal data packets and various tasks can be executed based on those images.
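The two-level addressing described above can be sketched as follows; the group size, the number of groups per macro block, and the row-major linear pixel addressing are all assumptions for illustration, not values from the disclosure:

```python
GROUP_SIZE = 8        # pixels per pixel group (assumed)
GROUPS_PER_MACRO = 4  # pixel groups per macro address (assumed)

def group_of(pixel_addr):
    """Index of the pixel group a linear pixel address falls into."""
    return pixel_addr // GROUP_SIZE

def macro_and_group(pixel_addr):
    """(macro address, group address) for a linear pixel address."""
    g = group_of(pixel_addr)
    return g // GROUPS_PER_MACRO, g % GROUPS_PER_MACRO

# Pixel 43 lies in group 5, which is the second group (index 1) of macro block 1.
assert macro_and_group(43) == (1, 1)
```

Because one macro address covers GROUP_SIZE × GROUPS_PER_MACRO pixels, it is encoded once for many events instead of once per pixel, which is the source of the address-redundancy saving.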
In a third aspect, embodiments of the present disclosure provide an image processing system.
Fig. 10 is a block diagram of an image processing system according to an embodiment of the present disclosure.
Referring to fig. 10, an embodiment of the present disclosure provides an image processing system 1000 including: an image sensor based data processing device 1010 and at least one image sensor 1020; wherein:
the image sensor 1020 is configured to acquire spatiotemporal signals of a plurality of pixels based on a preset sampling period;
an image sensor-based data processing apparatus 1010 for performing the image sensor-based data processing method of any of the embodiments of the present disclosure.
According to the embodiments of the present disclosure, the image sensor acquires and outputs the spatio-temporal signals of the pixels, the pixels are grouped, the time of the spatio-temporal signals is encoded globally, and the addresses are encoded uniformly based on the macro address and group address of each pixel group. This relieves the redundancy of time and address encoding while preserving the signal information of each pixel, so that the corresponding images can be recovered from the signal data packets and various tasks can be executed based on those images.
In addition, the present disclosure further provides an electronic device and a computer-readable storage medium, each of which may be used to implement any of the image sensor-based data processing methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the method sections above, which are not repeated here.
Fig. 11 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 11, an embodiment of the present disclosure provides an electronic device including: at least one processor 1101; at least one memory 1102, and one or more I/O interfaces 1103 connected between the processor 1101 and the memory 1102; the memory 1102 stores one or more computer programs executable by the at least one processor 1101, and the one or more computer programs are executed by the at least one processor 1101 to enable the at least one processor 1101 to perform the image sensor-based data processing method described above.
Fig. 12 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Referring to fig. 12, an embodiment of the present disclosure provides an electronic device including a plurality of processing cores 1201 and a network-on-chip 1202, wherein the plurality of processing cores 1201 are each connected to the network-on-chip 1202, and the network-on-chip 1202 is configured to exchange data among the plurality of processing cores 1201 and with external devices.
Wherein one or more processing cores 1201 have one or more instructions stored therein, the one or more instructions being executed by the one or more processing cores 1201 to enable the one or more processing cores 1201 to perform the image sensor based data processing method described above.
In some embodiments, the electronic device may be a brain-inspired chip. Since a brain-inspired chip may employ vectorized computation, parameters such as the weight information of a neural network model need to be loaded from an external memory, for example a double data rate (DDR) synchronous dynamic random access memory. The embodiments of the present disclosure therefore achieve high operating efficiency for batch processing.
The disclosed embodiments also provide a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor/processing core, implements the above-described image sensor-based data processing method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when executed in a processor of an electronic device, performs the above-described image sensor-based data processing method.
Those of ordinary skill in the art will appreciate that all or some of the steps, systems, functional modules/units in the apparatus, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer-readable storage media, which may include computer storage media (or non-transitory media) and communication media (or transitory media).
The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable program instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Static Random Access Memory (SRAM), flash memory or other memory technology, portable Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable program instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and may include any information delivery media.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for performing the operations of the present disclosure can be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
The computer program product described herein may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, it will be apparent to one skilled in the art that features, characteristics, and/or elements described in connection with a particular embodiment may be used alone or in combination with other embodiments unless explicitly stated otherwise. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.
Claims (17)
1. A data processing method based on an image sensor, comprising:
acquiring space-time signals of a plurality of pixels through an image sensor;
grouping the pixels to obtain a plurality of pixel groups;
determining the group address of each pixel group and the macro addresses of a plurality of pixel groups according to the pixel addresses in each pixel group;
and encoding according to the sampling period corresponding to the space-time signal, the group address of each pixel group, the macro addresses of a plurality of pixel groups and the space-time signal of the pixels in each pixel group to obtain signal data packets of a plurality of pixel groups.
2. The method of claim 1, wherein the acquiring, by the image sensor, the spatio-temporal signals of the plurality of pixels comprises:
acquiring space-time signals of a plurality of pixels through the image sensor based on a preset sampling period; the spatio-temporal signals are used to characterize at least signal variation information of the pixels in the time dimension and signal difference information in the spatial dimension.
3. The method of claim 2, wherein the acquiring, by the image sensor, the spatiotemporal signals of the plurality of pixels based on the preset sampling period comprises:
sampling output data of the image sensor according to the sampling period to obtain space-time signals of a plurality of pixels corresponding to the image sensor;
or,
and under the condition that the image sensor outputs data outwards based on the sampling period, obtaining space-time signals of a plurality of corresponding pixels according to the output data of the image sensor.
4. The method of claim 1, wherein the spatio-temporal signal is a light intensity class signal including at least a temporal dimension variation amount and a spatial dimension difference amount; the image sensor is also used for acquiring color signals of a plurality of pixels;
the time dimension variation is the variation between the light intensity of the pixel in the current sampling period and the light intensity of the pixel in the previous sampling period, and the space dimension difference is the difference between the light intensity of the pixel in the current sampling period and the light intensity of at least one adjacent pixel in the current sampling period.
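As an illustration of claim 4, the two signal components for one pixel might be computed as below; comparing against the right-hand neighbor is an arbitrary choice here, since the claim only requires at least one adjacent pixel:

```python
def spatiotemporal_signal(curr, prev, x, y):
    """Light-intensity signal of pixel (x, y) for the current sampling period:
    (time-dimension variation, spatial-dimension difference)."""
    dt = curr[y][x] - prev[y][x]               # change vs. the previous sampling period
    nx = x + 1 if x + 1 < len(curr[y]) else x  # right neighbour, clamped at the edge
    ds = curr[y][x] - curr[y][nx]              # difference vs. one adjacent pixel
    return dt, ds

prev = [[10, 10], [10, 10]]
curr = [[13, 9], [10, 10]]
assert spatiotemporal_signal(curr, prev, 0, 0) == (3, 4)
```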
5. The method of claim 4, wherein the time dimension delta has a first preset data precision and the space dimension delta has a second preset data precision;
the first preset data precision corresponds to the same data precision as the second preset data precision, or the first preset data precision corresponds to different data precision from the second preset data precision.
6. The method of claim 1, wherein grouping the plurality of pixels to obtain a plurality of pixel groups comprises:
dividing a pixel strip formed by a plurality of adjacent pixels in a row into a pixel group to obtain a plurality of pixel groups;
and/or,
and dividing a pixel block formed by a plurality of adjacent pixels in a plurality of rows into a pixel group to obtain a plurality of pixel groups.
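Claim 6's two grouping strategies can be sketched as follows; the strip length and block dimensions are free parameters for illustration, not values from the claim:

```python
def strip_groups(width, height, strip_len):
    """Divide each row into horizontal strips of up to strip_len adjacent pixels."""
    return [[(x, y) for x in range(x0, min(x0 + strip_len, width))]
            for y in range(height) for x0 in range(0, width, strip_len)]

def block_groups(width, height, bw, bh):
    """Divide the pixel array into bw x bh blocks of adjacent pixels spanning several rows."""
    return [[(x, y) for y in range(y0, min(y0 + bh, height))
                    for x in range(x0, min(x0 + bw, width))]
            for y0 in range(0, height, bh) for x0 in range(0, width, bw)]

assert len(strip_groups(8, 2, 4)) == 4                     # 2 strips per row x 2 rows
assert all(len(g) == 8 for g in block_groups(8, 4, 4, 2))  # four 4x2 blocks
```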
7. The method of claim 1, wherein grouping the plurality of pixels to obtain a plurality of pixel groups comprises:
determining the number of pixels in each pixel group according to the data precision of the space-time signal and the format requirement of the signal data packet;
determining a grouping size according to pixel data in each pixel group, wherein the grouping size is used for representing the data sizes of a plurality of pixels in the pixel group, and the grouping size corresponds to a vector form and/or a matrix form;
and grouping a plurality of pixels based on the grouping size to obtain a plurality of pixel groups.
8. The method of claim 1, wherein the signal packets comprise time stamp packets, address packets, and time space packets;
the encoding is performed according to the sampling period corresponding to the spatio-temporal signal, the group address of each pixel group, the macro addresses of a plurality of pixel groups, and the spatio-temporal signal of the pixels in each pixel group, so as to obtain signal data packets of a plurality of pixel groups, including:
determining the time stamps of a plurality of pixel groups according to the sampling period corresponding to the space-time signal, and uniformly encoding the time stamps of the pixel groups to obtain time stamp data packets of the pixel groups;
encoding according to macro addresses of a plurality of pixel groups to obtain address data packets of the pixel groups;
and aiming at each pixel group, encoding according to the group address of the pixel group and the space-time signal of each pixel in the pixel group to obtain a space-time data packet of each pixel group.
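One possible shape for the three packet types of claim 8 is sketched below; the field names and dictionary layout are illustrative assumptions, not the packet format of the disclosure:

```python
def encode_packets(sample_index, macro_addr, groups):
    """Assemble one shared timestamp packet, one address packet, and one
    spatio-temporal packet per pixel group (groups: group address -> signals)."""
    timestamp_packet = {"type": "TS", "t": sample_index}      # one timestamp for all groups
    address_packet = {"type": "ADDR", "macro": macro_addr}    # macro address, encoded once
    st_packets = [{"type": "ST", "group": g, "signal": sig}   # group address + signals
                  for g, sig in groups.items()]
    return [timestamp_packet, address_packet] + st_packets

pkts = encode_packets(7, 2, {0: [1, 0, 3], 1: [0, 0, 2]})
assert len(pkts) == 4 and pkts[0] == {"type": "TS", "t": 7}
```

Encoding the timestamp and macro address once per batch, rather than once per event, is what removes the time and address redundancy.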
9. The method of claim 8, wherein the address data packet comprises an address code and a valid pixel group code;
the address coding is obtained by coding according to macro addresses of a plurality of pixel groups, the effective pixel group coding is obtained by coding according to states of the pixel groups, the states of the pixel groups comprise an effective state and an ineffective state, the variation of the light intensity of each pixel in the pixel groups in the ineffective state in adjacent sampling periods is smaller than or equal to a preset threshold, and the variation of the light intensity of at least one pixel in the pixel groups in the effective state in the adjacent sampling periods is larger than the preset threshold.
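The valid pixel group coding of claim 9 reduces to a per-group bitmap; a sketch, assuming each group is given as its pixels' intensity changes between adjacent sampling periods:

```python
def valid_group_bits(groups, threshold):
    """1 if any pixel in the group changed by more than the threshold between
    adjacent sampling periods (valid state), else 0 (invalid state)."""
    return [1 if any(abs(d) > threshold for d in g) else 0 for g in groups]

assert valid_group_bits([[0, 0, 1], [0, 5, 0], [2, 0, 0]], threshold=1) == [0, 1, 1]
```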
10. The method of claim 9, wherein said encoding based on the group address of said group of pixels and the spatio-temporal signal of each pixel within said group of pixels, after obtaining the spatio-temporal data packet of each said group of pixels, further comprises:
and discarding the space-time data packet of the pixel group when the state of the pixel group is an invalid state.
11. The method of claim 8, wherein the spatio-temporal signal includes a temporal dimension variation and a spatial dimension difference, and the temporal dimension variation has a first preset data precision and the spatial dimension difference has a second preset data precision, and the spatio-temporal data packet includes a temporal dimension sub-packet and a spatial dimension sub-packet;
the encoding according to the group address of the pixel group and the space-time signal of each pixel in the pixel group, to obtain the space-time data packet of each pixel group, including:
coding the time dimension variation of each pixel in the pixel group according to the first preset data precision to obtain a time dimension code, and coding the group address of the pixel group to obtain a first group address code;
obtaining a time dimension sub-data packet of the pixel group according to the time dimension code and the first group address code;
coding the space dimension difference of each pixel in the pixel group according to the second preset data precision to obtain a space dimension code, and coding the group address of the pixel group to obtain a second group address code;
and obtaining the space dimension sub-data packet of the pixel group according to the space dimension code and the second group address code.
12. The method of claim 11, wherein the temporal dimensional variation of the plurality of pixels within the group of pixels corresponds to a first matrix, the first matrix corresponding to a row vector form, a column vector form, or a matrix form;
the step of encoding the time dimension variation of each pixel in the pixel group according to the first preset data precision to obtain a time dimension code includes:
generating a first flag bit matrix with the same size as the first matrix according to a comparison result between the time dimension variation of each pixel in the pixel group and a first preset invalid value;
wherein the elements of the first flag bit matrix correspond to the elements of the first matrix, the elements of the first flag bit matrix comprise first valid flag bits and first invalid flag bits, a first valid flag bit indicates that the time dimension variation of the pixel corresponding to the element in the first matrix is not the first preset invalid value, and a first invalid flag bit indicates that the time dimension variation of the pixel corresponding to the element in the first matrix is the first preset invalid value;
generating a first compression vector from the elements of the first matrix whose values are not the first preset invalid value;
and encoding the first flag bit matrix and the first compression vector to obtain the time dimension code of the pixel group.
13. The method of claim 11, wherein the amount of spatial dimension difference for a plurality of pixels within the group of pixels corresponds to a second matrix, the second matrix corresponding to a row vector form, a column vector form, or a matrix form;
the step of encoding the spatial dimension difference of each pixel in the pixel group according to the second preset data precision to obtain a spatial dimension code, including:
generating a second flag bit matrix with the same size as the second matrix according to a comparison result between the space dimension difference amount of each pixel in the pixel group and a second preset invalid value;
wherein the elements of the second flag bit matrix correspond to the elements of the second matrix, the elements of the second flag bit matrix comprise second valid flag bits and second invalid flag bits, a second valid flag bit indicates that the space dimension difference amount of the pixel corresponding to the element in the second matrix is not the second preset invalid value, and a second invalid flag bit indicates that the space dimension difference amount of the pixel corresponding to the element in the second matrix is the second preset invalid value;
generating a second compression vector from the elements of the second matrix whose values are not the second preset invalid value;
and encoding the second flag bit matrix and the second compression vector to obtain the space dimension code of the pixel group.
14. A data processing apparatus based on an image sensor, comprising:
an acquisition module for acquiring spatiotemporal signals of a plurality of pixels by an image sensor;
the grouping module is used for grouping the pixels to obtain a plurality of pixel groups;
the determining module is used for determining the group address of each pixel group and the macro addresses of a plurality of pixel groups according to the pixel addresses in each pixel group;
the coding module is used for coding according to the sampling period corresponding to the space-time signal, the group address of each pixel group, the macro addresses of a plurality of pixel groups and the space-time signal of the pixels in each pixel group to obtain signal data packets of a plurality of pixel groups.
15. An image processing system comprising image sensor-based data processing means and at least one image sensor; wherein,
the image sensor is used for acquiring space-time signals of a plurality of pixels based on a preset sampling period;
the image sensor-based data processing apparatus is configured to perform the image sensor-based data processing method according to any one of claims 1-13.
16. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores one or more computer programs executable by the at least one processor to enable the at least one processor to perform the image sensor-based data processing method of any one of claims 1-13.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the image sensor-based data processing method according to any one of claims 1-13.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310085641.0A CN116193129A (en) | 2023-01-19 | 2023-01-19 | Data processing method and device based on image sensor and image processing system |
PCT/CN2023/138718 WO2024152811A1 (en) | 2023-01-19 | 2023-12-14 | Image-sensor-based data processing method and apparatus, and image processing system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116193129A true CN116193129A (en) | 2023-05-30 |
Family
ID=86441758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310085641.0A Pending CN116193129A (en) | 2023-01-19 | 2023-01-19 | Data processing method and device based on image sensor and image processing system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116193129A (en) |
WO (1) | WO2024152811A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024152811A1 (en) * | 2023-01-19 | 2024-07-25 | 北京灵汐科技有限公司 | Image-sensor-based data processing method and apparatus, and image processing system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113495377B (en) * | 2020-04-08 | 2022-08-26 | 华为技术有限公司 | Silicon-based liquid crystal loading device, silicon-based liquid crystal device and silicon-based liquid crystal modulation method |
JP2023085573A (en) * | 2020-04-10 | 2023-06-21 | ソニーセミコンダクタソリューションズ株式会社 | Imaging apparatus and imaging method |
CN111510650B (en) * | 2020-04-26 | 2021-06-04 | 豪威芯仑传感器(上海)有限公司 | an image sensor |
CN112505661B (en) * | 2020-11-23 | 2024-09-17 | Oppo(重庆)智能科技有限公司 | Pixel control method, pixel module, device, terminal and storage medium |
CN116193129A (en) * | 2023-01-19 | 2023-05-30 | 北京灵汐科技有限公司 | Data processing method and device based on image sensor and image processing system |
- 2023-01-19: CN application CN202310085641.0A filed (CN116193129A, status: active, pending)
- 2023-12-14: PCT application PCT/CN2023/138718 filed (WO2024152811A1)
Also Published As
Publication number | Publication date |
---|---|
WO2024152811A1 (en) | 2024-07-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7437485B2 (en) | Image data encoding method and device, and image data decoding method and device | |
Bi et al. | Spike coding for dynamic vision sensors | |
US10462476B1 (en) | Devices for compression/decompression, system, chip, and electronic device | |
US20220217293A1 (en) | Apparatus for encoding image, apparatus for decoding image and image sensor | |
US6831684B1 (en) | Circuit and method for pixel rearrangement in a digital pixel sensor readout | |
EP4300958A1 (en) | Video image encoding method, video image decoding method and related devices | |
US11871156B2 (en) | Dynamic vision filtering for event detection | |
KR20200011000A (en) | Device and method for augmented reality preview and positional tracking | |
EP4231644A1 (en) | Video frame compression method and apparatus, and video frame decompression method and apparatus | |
WO2022188120A1 (en) | Event-based vision sensor and method of event filtering | |
WO2017205597A1 (en) | Image signal processing-based encoding hints for motion estimation | |
CN116193129A (en) | Data processing method and device based on image sensor and image processing system | |
CN117994149A (en) | Image reconstruction method and pulse camera | |
US20220058774A1 (en) | Systems and Methods for Performing Image Enhancement using Neural Networks Implemented by Channel-Constrained Hardware Accelerators | |
US12067696B2 (en) | Image sensors with variable resolution image format | |
Freeman et al. | An asynchronous intensity representation for framed and event video sources | |
CN109379590A (en) | A pulse sequence compression method and system |
WO2025038454A1 (en) | Base graph based mesh compression | |
CN109474825B (en) | A pulse sequence compression method and system |
CN116934647A (en) | Compressed light field quality enhancement method based on spatial angle deformable convolution network | |
JP7574521B2 | Method and apparatus for hierarchical audio/video or image compression | |
Aurangzeb et al. | Analysis of binary image coding methods for outdoor applications of wireless vision sensor networks | |
Tabus et al. | Lossless compression of event data and optical flow images from event cameras | |
Cao et al. | Entropy modeling via Gaussian process regression for learned image compression | |
US9264707B2 (en) | Multi-symbol run-length coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||