CN117643047A - Camera module, image processing method and device, terminal, electronic equipment and medium
- Publication number: CN117643047A (application CN202280004460.4)
- Authority: CN (China)
- Prior art keywords: image, pixel, lens, pixels, region
- Legal status: Pending
Classifications
- H: Electricity
- H04: Electric communication technique
- H04N: Pictorial communication, e.g. television
- H04N5/00: Details of television systems
- H04N5/222: Studio circuitry; studio devices; studio equipment
- H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
Abstract
The present disclosure provides a camera module, an image processing method and apparatus, a terminal, an electronic device, and a medium. The camera module comprises an image sensor and at least two lenses, and the method comprises: acquiring an original image, wherein the original image comprises a plurality of region images, each region image corresponds to a different imaging region of the image sensor, the center pixels of the different imaging regions are not all identical, and a center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the corresponding lens; and generating a target image to be output based on the plurality of region images. When the region images captured by the different imaging regions of the image sensor are processed to generate the target image to be output, image details can be effectively retained, image processing quality is ensured, and the image processing effect is improved.
Description
The present disclosure relates to the technical field of electronic devices, and in particular to a camera module, an image processing method and apparatus, a terminal, an electronic device, and a medium.
With the development of image capturing technology, the demand for the definition of an image captured by an electronic device is increasing.
In the related art, an electronic device is provided with a camera module in which a lens is arranged over an image sensor for imaging, and the lens's image is then enhanced by an image signal processor (Image Signal Processor, ISP) to improve image quality. When the size of the image sensor changes, the image plane grows accordingly, and to compensate for the loss of photosensitive pixels the height of the lens is usually increased to match the new sensor size.
In this approach, the required lens height is tied to the size of the image sensor, the lens layout over the sensor is inflexible, and an effective balance between the lens layout and the image processing effect cannot be achieved.
Disclosure of Invention
Embodiments of the present disclosure provide a camera module, an image processing method and apparatus, a terminal, an electronic device, and a medium, applicable to the technical field of electronic devices. When region images captured by different imaging regions of an image sensor are processed to generate a target image to be output, image details can be effectively retained, image generation quality is ensured, and the image processing effect is improved.
In a first aspect, embodiments of the present disclosure provide a camera module including an image sensor and at least two lenses;
each lens corresponds to a different imaging region of the image sensor, and the center pixels of the different imaging regions are not all identical;
the center pixel of an imaging region is the pixel, among the pixels of that imaging region, that corresponds to the field-of-view center of the lens corresponding to the region.
In some embodiments of the present disclosure, the types of the center pixels of the different imaging regions together cover the pixel types in the image sensor.
In some embodiments of the present disclosure, for a first lens and a second lens that are adjacent in any first direction among the at least two lenses, the first center pixel of the imaging region corresponding to the first lens differs from the second center pixel of the imaging region corresponding to the second lens, or the arrangement of the pixels adjacent to the first center pixel differs from the arrangement of the pixels adjacent to the second center pixel.
In some embodiments of the present disclosure, for any first imaging region and second imaging region that are adjacent and share a side, the distance between the field-of-view center of the third lens corresponding to the first imaging region and the field-of-view center of the fourth lens corresponding to the second imaging region is less than or equal to the sum of the half side length of the third lens and the half side length of the fourth lens;
the half side length of the third lens is half the distance between the farthest points of the orthographic projection formed by the third lens, and the half side length of the fourth lens is half the distance between the farthest points of the orthographic projection formed by the fourth lens.
In a second aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring an original image, wherein the original image comprises a plurality of region images, each region image corresponds to a different imaging region of an image sensor, the center pixels of the different imaging regions are not all identical, and a center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the lens; and
generating a target image to be output based on the plurality of region images.
In some embodiments of the present disclosure, generating a target image to be output based on a plurality of region images includes:
performing alignment processing on the plurality of region images to obtain an aligned image, wherein the alignment processing makes the pixel types at corresponding positions of the region images the same; and
obtaining the target image based on the aligned image.
In some embodiments of the present disclosure, performing alignment processing on the plurality of region images to obtain the aligned image includes:
cropping the plurality of region images, wherein for a third region image and a fourth region image that are adjacent in any second direction and share a side, the first center pixel within the cropped region of the third region image has a pixel offset relative to the second center pixel within the cropped region of the fourth region image; and
performing alignment processing on the cropped images to obtain the aligned image;
wherein the pixel offset is such that, in the cropped region of the third region image, the pixel at the position corresponding to the second center pixel is the pixel reached by shifting from the first center pixel, along the second direction, to a pixel of the same type as the second center pixel.
In some embodiments of the present disclosure, performing alignment processing on the cropped images to obtain the aligned image includes:
generating an optical flow feature map corresponding to the cropped images;
performing up-sampling processing on the optical flow feature map to obtain an up-sampled image; and
performing alignment processing on the up-sampled image to obtain the aligned image.
In some embodiments of the present disclosure, within the cropped region of each region image, the center pixel of the region image is located at the center of the cropped region, or the center pixel of the region image is closer to the center of the region than any non-center pixel of the region image;
wherein a non-center pixel of a region image is any pixel of the region image other than the center pixel.
In some embodiments of the present disclosure, obtaining the target image based on the aligned image includes:
extracting pluralities of pixels of the same kind from the aligned image, wherein the pixels have corresponding pixel positions in the aligned image;
combining the pixels according to their pixel positions to obtain images to be fused; and
fusing the multiple frames of images to be fused to obtain the target image.
In some embodiments of the present disclosure, combining the pixels according to their pixel positions to obtain the image to be fused includes:
combining the pixels according to their pixel positions to obtain a combined image;
determining image semantic features corresponding to the combined image; and
performing up-sampling processing on the combined image according to the image semantic features to obtain the image to be fused.
In a third aspect, an embodiment of the present disclosure provides an image processing apparatus including:
an acquisition module, configured to acquire an original image, wherein the original image comprises a plurality of region images, each region image corresponds to a different imaging region of the image sensor, the center pixels of the different imaging regions are not all identical, and a center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the lens; and
a generation module, configured to generate a target image to be output based on the plurality of region images.
In some embodiments of the present disclosure, the generating module includes:
a first processing sub-module, configured to perform alignment processing on the plurality of region images to obtain an aligned image, wherein the alignment processing makes the pixel types at corresponding positions of the region images the same; and
a second processing sub-module, configured to obtain the target image based on the aligned image.
In some embodiments of the present disclosure, the first processing sub-module is specifically configured to:
crop the plurality of region images, wherein for a third region image and a fourth region image that are adjacent in any second direction and share a side, the first center pixel within the cropped region of the third region image has a pixel offset relative to the second center pixel within the cropped region of the fourth region image; and
perform alignment processing on the cropped images to obtain the aligned image;
wherein the pixel offset is such that, in the cropped region of the third region image, the pixel at the position corresponding to the second center pixel is the pixel reached by shifting from the first center pixel, along the second direction, to a pixel of the same type as the second center pixel.
In some embodiments of the present disclosure, the first processing sub-module is specifically configured to:
generate an optical flow feature map corresponding to the cropped images;
perform up-sampling processing on the optical flow feature map to obtain an up-sampled image; and
perform alignment processing on the up-sampled image to obtain the aligned image.
In some embodiments of the present disclosure, within the cropped region of each region image, the center pixel of the region image is located at the center of the cropped region, or the center pixel of the region image is closer to the center of the region than any non-center pixel of the region image;
wherein a non-center pixel of a region image is any pixel of the region image other than the center pixel.
In some embodiments of the present disclosure, the second processing sub-module is specifically configured to:
extract pluralities of pixels of the same kind from the aligned image, wherein the pixels have corresponding pixel positions in the aligned image;
combine the pixels according to their pixel positions to obtain images to be fused; and
fuse the multiple frames of images to be fused to obtain the target image.
In some embodiments of the present disclosure, the second processing sub-module is specifically configured to:
combine the pixels according to their pixel positions to obtain a combined image;
determine image semantic features corresponding to the combined image; and
perform up-sampling processing on the combined image according to the image semantic features to obtain the image to be fused.
In a fourth aspect, an embodiment of the present disclosure provides a terminal comprising the camera module set forth in the embodiments of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide an electronic device, including:
a camera module;
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method set forth in the foregoing second aspect embodiment.
In a sixth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image processing method set forth in the embodiments of the second aspect of the present disclosure.
In a seventh aspect, embodiments of the present disclosure provide a computer program product comprising a computer program which, when executed by a processor, implements the image processing method proposed by the embodiments of the second aspect of the present disclosure.
In summary, the camera module, image processing method and apparatus, terminal, electronic device, storage medium, computer program, and computer program product provided in the embodiments of the present disclosure can achieve the following technical effects:
An original image is acquired, wherein the original image comprises a plurality of region images, each region image corresponds to a different imaging region of the image sensor, the center pixels of the different imaging regions are not all identical, and a center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the lens; a target image to be output is generated based on the plurality of region images. When the region images captured by different imaging regions of the image sensor are processed to generate the target image to be output, image details can be effectively retained, image generation quality is ensured, and the image processing effect is improved.
To illustrate the technical solutions in the embodiments of the present disclosure or in the background more clearly, the drawings required by the embodiments or the background are described below.
Fig. 1 is a schematic structural diagram of a camera module according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a lens layout in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of a region image in the present disclosure;
Fig. 4 is a flowchart of an image processing method according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of target image generation according to another embodiment of the present disclosure;
Fig. 6 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 7 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 8 is a schematic diagram of pixel offset according to another embodiment of the present disclosure;
Fig. 9 is a flowchart of an image processing method according to another embodiment of the present disclosure;
Fig. 10 is a schematic diagram of an image alignment approach in an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of an image fusion approach in an embodiment of the present disclosure;
Fig. 12 is a comparison of pixel-offset processing results according to another embodiment of the present disclosure;
Fig. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
Fig. 14 is a schematic structural diagram of an image processing apparatus according to another embodiment of the present disclosure;
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure;
Fig. 16 is a block diagram of an exemplary electronic device suitable for implementing embodiments of the present disclosure.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the embodiments of the present disclosure. Rather, they are merely examples of apparatus and methods consistent with aspects of embodiments of the present disclosure as detailed in the accompanying claims.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the disclosure. As used in this disclosure of embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various information, the information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the embodiments of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
For ease of understanding, the terms referred to in this disclosure are first introduced.
1. Imaging region
The imaging region is the area of the image sensor that identifies the visible light captured by a lens and forms an image.
2. Center pixel
The center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the lens corresponding to that imaging region.
Fig. 1 is a schematic structural diagram of a camera module according to an embodiment of the disclosure.
The camera module 10 includes an image sensor 101 and at least two lenses 102, wherein each lens 102 corresponds to a different imaging region of the image sensor 101, and the center pixels of the different imaging regions are not all identical.
That is, embodiments of the present disclosure support arranging a plurality of lenses 102 over one image sensor 101; the number of lenses 102 is at least two and is not otherwise limited.
In embodiments of the present disclosure, the layout of the plurality of lenses 102 over the image sensor 101 may be preset, and the lenses 102 arranged over the image sensor 101 according to the predetermined layout, so that each lens 102 corresponds to a different imaging region of the image sensor 101.
For example, the layout may use shaped cutting, a way of cutting the lenses into non-circular shapes. Shaped cutting effectively shortens the spacing between some lenses in the lens array, so the plurality of lenses 102 can be arranged as close together as possible. Of course, the layout may also be configured according to the size and shape of the image sensor 101, which is not limited.
In embodiments of the present disclosure, as shown in fig. 2 (a schematic diagram of a lens layout in an embodiment of the present disclosure), a plurality of lenses 102 may be laid out for one image sensor 101; fig. 2 shows four lenses 102 laid out for the image sensor 101 by way of example, which is not limited.
The imaging region is the area of the image sensor 101 that identifies the ambient-light information captured by a lens 102 and forms an image. As shown in the lens layout of fig. 2, the area of the image sensor 101 corresponding to a lens 102 may serve as its imaging region, which is not limited.
The information of the ambient light may specifically be, for example, light intensity and wavelength information of the ambient light, and the like, which is not limited.
Among the pixels of an imaging region, the center pixel is the pixel corresponding to the field-of-view center of the lens 102 corresponding to that region. As shown in fig. 2, the pixel of the imaging region corresponding to the center position of the lens 102 (which may be the field-of-view center of the lens 102) may be taken as the center pixel; of course, the center pixel may also be defined flexibly according to actual shooting requirements, which is not limited.
The field of view (Field Of View, FOV) is the range over which the lens 102 can sense ambient light for the camera module 10, and the field-of-view center is the center point of the FOV that the lens 102 can sense. In embodiments of the present disclosure, the center point of the region image captured by a single lens 102 may be taken as the field-of-view center of that lens; the field-of-view center has a corresponding center pixel in the imaging region, which is not limited.
The image formed by a lens 102 based on the ambient-light information captured by the corresponding imaging pixels may be referred to as a region image, as shown in fig. 3 (a schematic diagram of a region image in the present disclosure).
A region image may specifically be, for example, a RAW-format image that is acquired by the corresponding imaging region of the electronic device's image sensor and has not undergone any processing, which is not limited.
A RAW-format image is the region image in which the image sensor converts the light signal captured by the corresponding imaging region into a digital signal. A RAW-format image records the raw information from the camera sensor, together with some metadata generated when shooting, such as the sensitivity (ISO) setting, shutter speed, aperture value, and white balance.
In embodiments of the present disclosure, the field-of-view center corresponding to a lens can be determined from the midpoint of its region image, and the center pixel in the corresponding imaging region of the image sensor can then be determined from that field-of-view center.
In embodiments of the present disclosure, since there are a plurality of lenses 102, a different center pixel may be set for each lens, and the center pixels of the different imaging regions are not all identical, which is not limited.
The types of the center pixels of different imaging regions may differ; that is, each imaging region's center pixel has a corresponding type, and these types may be different from one another, which is not limited.
In some embodiments of the present disclosure, the types of the center pixels of the different imaging regions together cover the pixel types in the image sensor.
The pixel types may, for example, follow the red-green-blue (RGB) color system, i.e., pixels may be classified as red (R), green (G), and blue (B) pixels; alternatively, pixels may be classified by luminance and chrominance as luminance (Y), chrominance (U), and chroma (V) pixels, which is not limited.
In embodiments of the present disclosure, the pixels of the image sensor 101 may follow a certain arrangement rule, for example a Bayer array. As shown in fig. 3, four lenses 102 are configured for the image sensor 101, and the pixels of the image sensor 101 follow the red-green-green-blue (RGGB) arrangement. When the four lenses 102 capture region images and provide them to the image sensor 101, the field-of-view centers of the different lenses are aligned with different kinds of pixels in their imaging regions: in fig. 3, the field-of-view center of lens 1 is aligned with a red pixel, that of lens 2 with a green pixel, that of lens 3 with a green pixel different from that of lens 2, and that of lens 4 with a blue pixel. Thus the types of the center pixels of the different imaging regions together cover the pixel types of the image sensor, while the center pixels of the different imaging regions are not all identical; every pixel type in the image sensor is therefore guaranteed a corresponding center pixel, which effectively improves the imaging effect of the camera module.
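To make the mapping concrete, the following short Python sketch (illustrative only; the coordinates and helper function are hypothetical, not part of the disclosure) shows how four field-of-view centers can be chosen so that, on an RGGB Bayer sensor, they jointly cover all four positions of the 2x2 Bayer unit cell:

```python
BAYER_RGGB = [["R", "G"],
              ["G", "B"]]

def center_pixel_type(row, col):
    """Pixel type at (row, col) of an RGGB Bayer-patterned sensor."""
    return BAYER_RGGB[row % 2][col % 2]

# Hypothetical center-pixel coordinates for lenses 1-4 (one per imaging region):
lens_centers = {1: (100, 100), 2: (100, 101), 3: (101, 100), 4: (101, 101)}
print({lens: center_pixel_type(r, c) for lens, (r, c) in lens_centers.items()})
# {1: 'R', 2: 'G', 3: 'G', 4: 'B'} -- every pixel type has a center pixel
```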
In some embodiments of the present disclosure, for a first lens and a second lens that are adjacent in any first direction among the at least two lenses 102, the first center pixel of the imaging region corresponding to the first lens differs from the second center pixel of the imaging region corresponding to the second lens, or the arrangement of the pixels adjacent to the first center pixel differs from the arrangement of the pixels adjacent to the second center pixel.
The first direction is any one of the transverse, vertical, and diagonal directions, and the first lens and the second lens are two lenses adjacent in that first direction.
For example, as shown in fig. 2: if the first lens is lens 1 and the second lens is lens 2, they are adjacent in the transverse direction; if the first lens is lens 1 and the second lens is lens 3, they are adjacent in the vertical direction; and if the first lens is lens 1 and the second lens is lens 4, they are adjacent in the diagonal direction.
The adjacent-pixel arrangement of a pixel is the arrangement, centered on that pixel, of the other pixels adjacent to it.
In some embodiments, the first lens has a first lens imaging area corresponding thereto in the image sensor 101, the first lens imaging area having a first center pixel, and the second lens has a second lens imaging area corresponding thereto in the image sensor 101, the second lens imaging area having a second center pixel, the first center pixel being different from the second center pixel.
In other embodiments, the arrangement of the adjacent pixels corresponding to the first central pixel is different from the arrangement of the adjacent pixels corresponding to the second central pixel, which is not limited.
For example, as shown in fig. 3, in the Bayer array the pixels follow the red-green-green-blue (RGGB) arrangement. If the first lens is lens 1 and the second lens is lens 2, the first center pixel is a red pixel (the center pixel of lens 1) and the second center pixel is a green pixel (the center pixel of lens 2), so the first center pixel differs from the second center pixel. If the first lens is lens 2 and the second lens is lens 3, the first center pixel is one of the green pixels (the center pixel of lens 2) and the second center pixel is the other green pixel (the center pixel of lens 3), and the adjacent-pixel arrangement of the first center pixel differs from that of the second center pixel.
In some embodiments of the present disclosure, for any first imaging region and second imaging region that are adjacent and share a side, the distance between the field-of-view center of the third lens corresponding to the first imaging region and the field-of-view center of the fourth lens corresponding to the second imaging region is less than or equal to the sum of the half side length of the third lens and the half side length of the fourth lens.
The half side length of the third lens is half the distance between the farthest points of the orthographic projection formed by the third lens, and the half side length of the fourth lens is half the distance between the farthest points of the orthographic projection formed by the fourth lens. As shown in fig. 2, with lens 3 as the third lens and lens 4 as the fourth lens, the imaging regions corresponding to the third and fourth lenses are adjacent and share a side, and the two half side lengths are marked in fig. 2.
In this embodiment of the present disclosure, the field-of-view center of the third lens is the position corresponding to the center pixel of the third lens, and the field-of-view center of the fourth lens is the position corresponding to the center pixel of the fourth lens. As shown in fig. 3, when the first imaging region is adjacent to and shares a side with the second imaging region, the pixel corresponding to the field-of-view center of the third lens in the first imaging region and the pixel corresponding to the field-of-view center of the fourth lens in the second imaging region are each marked with a triangle. Combining fig. 2 and fig. 3, the distance between the two field-of-view centers is less than or equal to the sum of the half side length of the third lens and the half side length of the fourth lens. This constraint effectively guarantees a compact layout of the plurality of lenses 102, so that as much of the area of the image sensor 101 as possible is used for imaging and waste of sensor area is avoided.
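The layout constraint can be checked with a few lines of Python; this is a sketch under the assumption that the field-of-view centers and half side lengths are expressed in the same units (the function name is a placeholder, not terminology from the disclosure):

```python
import math

def layout_is_compact(center_a, center_b, half_side_a, half_side_b):
    """True if the distance between two adjacent lenses' field-of-view
    centers is at most the sum of their half side lengths."""
    return math.dist(center_a, center_b) <= half_side_a + half_side_b

# Example: two lenses with half side lengths of 2.0 whose field-of-view
# centers are 3.5 units apart satisfy the constraint (3.5 <= 2.0 + 2.0).
print(layout_is_compact((0.0, 0.0), (3.5, 0.0), 2.0, 2.0))  # True
```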
In the camera module provided in this embodiment, each lens corresponds to a different imaging region of the image sensor, and the center pixels of the different imaging regions are not all identical; the types of the center pixels of the different imaging regions together cover the pixel types in the image sensor; for a first lens and a second lens adjacent in any first direction among the at least two lenses, the first center pixel of the imaging region of the first lens differs from the second center pixel of the imaging region of the second lens, or the arrangement of the pixels adjacent to the first center pixel differs from that of the pixels adjacent to the second center pixel; and for any first and second imaging regions that are adjacent and share a side, the distance between the field-of-view centers of the corresponding third and fourth lenses is at most the sum of their half side lengths. A plurality of lenses can therefore be laid out compactly over the image sensor while image details are effectively retained, image generation quality is ensured, and the image output effect is improved.
Fig. 4 is a flowchart of an image processing method according to an embodiment of the disclosure.
It should be noted that the execution subject of the image processing method in this embodiment is an image processing apparatus, which may be implemented in software and/or hardware and configured in an electronic device. The electronic device may be any hardware device with an operating system and an imaging device, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device, which is not limited.
It should be noted that, in the embodiments of the present disclosure, the signals and data involved in image processing are obtained with the authorization of the relevant users, and the acquisition process complies with the relevant laws and regulations and does not violate public order and good morals.
As shown in fig. 4, the image processing method includes:
s401: the method comprises the steps of obtaining an original image, wherein the original image comprises a plurality of area images, each area image corresponds to different imaging areas of an image sensor, center pixels of the different imaging areas are not identical, and the center pixels are pixels corresponding to the field of view center of a lens in pixels of the imaging areas.
The image sensor may acquire the light intensity and wavelength information captured by each imaging pixel in an imaging region and provide a region image that can be processed by the image signal processor (ISP).
In embodiments of the present disclosure, an original image is acquired that is composed of the region images captured by the respective lenses. Each lens corresponds to one imaging region of the image sensor, and the imaging pixels of that region capture the ambient-light information transmitted by the lens to form a region image; the region image may then be provided to the image signal processor (ISP), which triggers the subsequent steps.
S402: generate a target image to be output based on the plurality of region images.
In embodiments of the present disclosure, the region images captured by the different lenses can be processed to generate the target image to be output.
The image obtained by processing the plurality of region images correspondingly (for example, using a relevant algorithm or model, which is not limited) may be referred to as the target image. When the plurality of region images are processed with reference to the center pixels of the different imaging regions, the target image carries the individual imaging information of the region image captured by each lens; the target image therefore has higher resolution, higher image quality, and richer image detail, achieving an effective balance between the lens layout and the image generation effect.
In embodiments of the present disclosure, a deep-learning image processing model may be built and used to generate the target image to be output based on the plurality of region images, or a custom image processing algorithm may be used instead, which is not limited.
Taking a camera module containing four lenses as a specific example, as shown in fig. 5 (a schematic diagram of target image generation according to another embodiment of the present disclosure), the four lenses each acquire a region image of the same scene, and algorithmic enhancement and super-resolution processing are applied to obtain a high-quality target image.
In this embodiment, an original image is acquired, wherein the original image comprises a plurality of region images, each region image corresponds to a different imaging region of the image sensor, the center pixels of the different imaging regions are not all identical, and a center pixel is the pixel, among the pixels of an imaging region, that corresponds to the field-of-view center of the lens; a target image to be output is generated based on the plurality of region images. When the region images captured by different imaging regions of the image sensor are processed to generate the target image to be output, image details can be effectively retained, image generation quality is ensured, and the image processing effect is improved.
Fig. 6 is a flowchart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 6, the image processing method includes:
s601: the method comprises the steps of obtaining an original image, wherein the original image comprises a plurality of area images, each area image corresponds to different imaging areas of an image sensor, center pixels of the different imaging areas are not identical, and the center pixels are pixels corresponding to the field of view center of a lens in pixels of the imaging areas.
The description of S601 may be specifically referred to the above embodiments, and will not be repeated here.
S602: carrying out alignment treatment on the plurality of area images to obtain an alignment image; wherein the alignment process is used to make the pixel type at the corresponding position of each region image identical.
The alignment image is an aligned image obtained by performing alignment processing on area images shot by different lenses in the same scene according to the position of the center pixel.
For example, taking an example of configuring four lenses for an image sensor, the four lenses respectively capture an area image, the positions of the central pixels of the four lenses are different, the position of the central pixel corresponding to one of the four lenses may be set as a reference, the area images captured by the remaining three lenses are aligned to the position of the central pixel serving as a reference in combination with the positions of the central pixels of the corresponding lenses to obtain an aligned image, or the arrangement condition of the pixels in the image sensor may be determined, the area images captured by the respective lenses may be processed based on the arrangement condition of the pixels, and the area images may be processed as an aligned image in accordance with the arrangement condition of the pixels, which is not limited.
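As a simplified sketch of the reference-based variant described above, the following Python snippet aligns region images by integer pixel shifts, assuming the only misalignment is the known offset between center-pixel positions (parallax and sub-pixel shifts are ignored, and np.roll wraps at the borders, which a real pipeline would crop away):

```python
import numpy as np

def align_to_reference(images, centers, ref=0):
    """Shift each image so its center pixel lands on the reference's position.
    images: list of (H, W) arrays; centers: list of (row, col) tuples."""
    ref_r, ref_c = centers[ref]
    return [np.roll(img, shift=(ref_r - r, ref_c - c), axis=(0, 1))
            for img, (r, c) in zip(images, centers)]
```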
In embodiments of the present disclosure, the region images may be aligned using an artificial-intelligence-based image processing method to obtain the aligned image, or cropped with reference to the pixel-type combination of the images to obtain the aligned image, or aligned in any other possible way, which is not limited.
In the aligned images obtained in embodiments of the present disclosure, the pixel types at corresponding positions of the aligned images are the same, and the arrangements of the adjacent pixels around the pixels at those positions are also the same, which facilitates the subsequent processing of the aligned images.
S603: obtain the target image based on the aligned image.
In embodiments of the present disclosure, after the region images are aligned to obtain the aligned images, fusion processing may be performed on multiple frames of aligned images to obtain the target image; alternatively, an image-information extraction technique may be used to extract the information in the images and generate the target image, or techniques such as image rendering and image enhancement may be used to process the aligned images to obtain the target image, which is not limited.
In embodiments of the present disclosure, the multiple frames of aligned images may be processed with a corresponding image processing algorithm (such as a super-resolution algorithm) that fuses their aliased content into a higher-resolution target image; or a deep-learning image processing network may reconstruct the multiple frames of aligned images by extracting their features; or the multiple frames of aligned images may be processed in any other possible way to obtain the target image, which is not limited.
In this embodiment, when image processing is performed on the region images captured by different imaging regions of the image sensor to generate the target image to be output, image details can be effectively retained, image processing quality is ensured, and the image processing effect is improved.
Fig. 7 is a flowchart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 7, the image processing method includes:
s701: the method comprises the steps of obtaining an original image, wherein the original image comprises a plurality of area images, each area image corresponds to different imaging areas of an image sensor, center pixels of the different imaging areas are not identical, and the center pixels are pixels corresponding to the field of view center of a lens in pixels of the imaging areas.
The description of S701 may be specifically referred to the above embodiments, and will not be repeated here.
S702: the plurality of region images are cropped, and for a third region image and a fourth region image that are adjacent in any second direction and that share sides, a first center pixel within a cropped region of the third region image has a pixel offset from a second center pixel within a cropped region of the fourth region image.
The pixel offset is such that, in the cropped region of the third region image, the pixel at the position corresponding to the second center pixel is the pixel reached by shifting from the first center pixel, along the second direction, to a pixel of the same type as the second center pixel.
In embodiments of the present disclosure, the pixel offset distance may be set to the length of one pixel, or other offset distances of any length (for example, half a pixel or two pixels) may be set according to actual requirements, which is not limited.
The second direction may be either the transverse direction or the vertical direction, which is not limited.
Optionally, in embodiments of the present disclosure, within the cropped region of each region image, the center pixel of the region image is located at the center of the cropped region, or the center pixel is closer to the center of the region than any non-center pixel, where a non-center pixel of a region image is any pixel other than the center pixel. Because the position of the center pixel is constrained in this way, it accurately represents its cropped region; keeping the center pixel as close to the center of the cropped region as possible effectively guarantees the efficiency and accuracy of determining the center pixel within the cropped region and enhances the image processing effect.
That is, the center pixel of the region image may be placed at the center of the cropped region, or at a position closer to the center of the region than any non-center pixel of the region image.
For example, if the cropped region is five by five pixels, the pixel in the third row and third column of the cropped region may be used as the center pixel, which is not limited.
In embodiments of the present disclosure, as shown in fig. 8 (a schematic diagram of pixel offset according to another embodiment of the present disclosure), taking four region images as an example, the region inside the dashed circle is the cropped region; the first center pixel is the pixel corresponding to the field-of-view center of the first region image, and the second center pixel is the pixel corresponding to the field-of-view center of the second region image.
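The cropping-with-offset step can be sketched as follows, assuming integer pixel coordinates and a one-pixel offset along the second direction (columns here); the helper is illustrative, not an API from the disclosure:

```python
import numpy as np

def crop_with_offset(region_image, center_row, center_col, size=5, offset_cols=0):
    """Crop a size x size window whose middle lands on
    (center_row, center_col + offset_cols); with size=5 the center pixel
    of an unshifted crop sits in the third row and third column."""
    half = size // 2
    row0, col0 = center_row - half, center_col + offset_cols - half
    return region_image[row0:row0 + size, col0:col0 + size]

raw = np.arange(100).reshape(10, 10)
crop_a = crop_with_offset(raw, 4, 4)                 # reference crop
crop_b = crop_with_offset(raw, 4, 4, offset_cols=1)  # shifted by one pixel
```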
S703: and carrying out alignment treatment on the image obtained after cutting to obtain an alignment image.
In the embodiment of the disclosure, the image obtained after clipping may be aligned according to the pixels corresponding to the same position, so as to obtain an aligned image.
In some embodiments of the present disclosure, the image processing algorithm may be used to perform clipping of the area image and alignment processing of the image obtained after clipping, or an image processing big data model may be set, and alignment processing may be performed on the image obtained after clipping based on a big data model technology to obtain an aligned image, or various technologies such as a feature extraction technology and an image recognition technology may be used to perform alignment processing on the image obtained after clipping to obtain an aligned image, which is not limited.
In this embodiment, when image processing is performed on the region images captured by different imaging regions of the image sensor to generate the target image to be output, image details can be effectively retained, image processing quality is ensured, and the image processing effect is improved.
Fig. 9 is a flowchart of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 9, the image processing method includes:
s901: the method comprises the steps of obtaining an original image, wherein the original image comprises a plurality of area images, each area image corresponds to different imaging areas of an image sensor, center pixels of the different imaging areas are not identical, and the center pixels are pixels corresponding to the field of view center of a lens in pixels of the imaging areas.
S902: the plurality of region images are cropped, and for a third region image and a fourth region image that are adjacent in any second direction and that share sides, a first center pixel within a cropped region of the third region image has a pixel offset from a second center pixel within a cropped region of the fourth region image.
The descriptions of S901-S902 may be specifically referred to the above embodiments, and are not repeated herein.
S903: and generating an optical flow characteristic map corresponding to the image obtained after clipping.
In the embodiment of the disclosure, the image obtained after clipping may be processed to generate an optical flow feature map.
The images obtained after different cropping may have the same or different optical flow characteristics, and a corresponding optical flow characteristic map may be generated based on the optical flow characteristics corresponding to the image obtained after cropping.
According to the embodiment of the disclosure, the optical flow network can be built, and the optical flow characteristic diagram corresponding to the cut image can be obtained by processing the cut image through the optical flow network.
For example, the image obtained after clipping may be processed by using a deep-learning optical flow network to generate an optical flow feature map corresponding to the image obtained after clipping, which is not limited, in some embodiments of the present disclosure, the image obtained after clipping may be subjected to downsampling, and if pixels of the image obtained after clipping are arranged in Red, green and Blue (Red Green Green Blue, RGGB) arrangement, the image obtained after clipping may be changed into an image with downsampled Red, green and Blue (RGB) pixels by discarding one Green (G) pixel, so as to facilitate generation of the optical flow feature map, and of course, the image obtained after clipping may also be processed by using any other possible manner to generate the optical flow feature map, which is not limited.
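A minimal sketch of the down-sampling step just described, assuming a single raw frame whose pixels follow the RGGB Bayer arrangement (each 2x2 RGGB cell becomes one RGB pixel by keeping R, one G, and B, and discarding the other G):

```python
import numpy as np

def rggb_to_rgb_downsample(raw):
    """raw: (2H, 2W) Bayer frame in RGGB order; returns (H, W, 3) RGB."""
    r = raw[0::2, 0::2]
    g = raw[0::2, 1::2]   # first green; raw[1::2, 0::2] is discarded
    b = raw[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)
```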
S904: and carrying out up-sampling processing on the light flow characteristic diagram to obtain an up-sampling image.
The image obtained by upsampling the light flow characteristic map may be referred to as an upsampled image.
In the embodiment of the disclosure, for the optical flow feature map output by the optical flow network, an up-sampling image may be generated by using a bilinear interpolation up-sampling mode, or the optical flow feature map may be processed by using methods such as nearest neighbor up-sampling, bicubic interpolation up-sampling, and the like, so as to generate the up-sampling image, which is not limited.
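For instance, bilinear up-sampling of an optical flow feature map might look like the following PyTorch sketch, assuming a (N, 2, H, W) flow tensor at half resolution; rescaling the displacement values by the same factor is a common convention, not something the disclosure specifies:

```python
import torch
import torch.nn.functional as F

flow = torch.randn(1, 2, 64, 64)                 # half-resolution flow map
flow_up = F.interpolate(flow, scale_factor=2.0,
                        mode="bilinear", align_corners=False) * 2.0
print(flow_up.shape)                             # torch.Size([1, 2, 128, 128])
```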
S905: and carrying out alignment processing on the up-sampling image to obtain an alignment image.
In the embodiment of the disclosure, the multi-frame up-sampling image obtained by up-sampling may be subjected to alignment processing to obtain an aligned image.
For example, as shown in fig. 10, fig. 10 is a schematic diagram of an image alignment manner in the embodiment of the disclosure, a set of area images is selected as reference area images, other area images are cropped based on the reference area images, the images obtained after cropping are processed by downsampling, the processed images are input into an optical flow network to generate an optical flow feature map, the optical flow feature map is processed by bilinear interpolation upsampling to generate an upsampled image, and the upsampled image is aligned to obtain an aligned image.
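One common way to realize the final alignment step is to warp each up-sampled region image onto the reference with its dense flow field via grid sampling; the disclosure does not prescribe a specific warping operator, so the following PyTorch sketch is an assumption:

```python
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    """image: (N, C, H, W); flow: (N, 2, H, W) holding (dx, dy) in pixels."""
    _, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1   # normalize to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack([grid_x, grid_y], dim=-1)   # (N, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)
```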
S906: a plurality of pixels of the same kind are extracted from the aligned image, wherein the pixels have corresponding pixel positions in the aligned image.
In the embodiment of the disclosure, the pixel recognition method may be used to extract a plurality of pixels of the same kind from the aligned image and record the corresponding pixel positions, or the pixel positions corresponding to the plurality of pixels of the same kind may be predetermined, and the plurality of pixels of the same kind may be extracted from the aligned image according to the pixel positions, or the plurality of pixels of the same kind may be extracted from the aligned image by using any other possible implementation manner, which is not limited.
For example, if the pixel arrangement is RGGB arrangement, a plurality of pixels of a red type in an aligned image may be extracted, and the pixel positions corresponding to the pixels for extracting the red type may be recorded, and similarly, a plurality of pixels of a green type and a plurality of pixels of a blue type may be extracted, which is not limited.
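The extraction step can be sketched as plain array slicing, assuming one aligned frame in the RGGB arrangement; each plane keeps the pixels of one kind, with their positions given implicitly by the slicing offsets:

```python
def split_bayer_planes(aligned):
    """aligned: (2H, 2W) raw frame in RGGB order; returns four (H, W) planes."""
    return {"R":  aligned[0::2, 0::2],
            "G1": aligned[0::2, 1::2],
            "G2": aligned[1::2, 0::2],
            "B":  aligned[1::2, 1::2]}
```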
S907: and combining a plurality of pixels according to the pixel positions to obtain an image to be fused.
The image obtained by combining the plurality of pixels may be referred to as an image to be fused, and the image to be fused may be used for fusing into a target image.
Optionally, in some embodiments, the pixels may be combined according to their pixel positions to obtain a combined image, the image semantic features corresponding to the combined image determined, and the combined image up-sampled according to the image semantic features to obtain the image to be fused.
The features of the combined image in the image-semantic dimension may be referred to as image semantic features; they may be texture or color features of the combined image, or depth features corresponding to the combined image, which is not limited.
In embodiments of the present disclosure, a corresponding image-feature-extraction network may be used to determine depth features as the image semantic features of the combined image; or semantic recognition may be performed on the combined image by means of image recognition to determine the image semantic features; or any other possible implementation may be used, which is not limited.
For example, if an article A and an article B appear in a combined image, contour features representing article A and article B may be extracted by image processing and used as the image semantic features of the combined image; alternatively, depth features corresponding to the combined image may be extracted as the image semantic features through a deep learning network (for example, a "U"-shaped semantic segmentation network).
In embodiments of the present disclosure, a deep-learning fusion network may be used to extract the image semantic features of the combined image, and the combined image up-sampled using a pixel rearrangement (pixel shuffle) technique together with the image semantic features to determine the image to be fused; alternatively, an image fusion processing model may be used to up-sample the combined image to obtain the image to be fused, or the combined image may be up-sampled in any other possible way, which is not limited.
Alternatively, in other embodiments, a feature extraction system may be built and a corresponding algorithm model used to extract the image semantic features of the combined image, which is not limited.
S908: fuse the multiple frames of images to be fused to obtain the target image.
In the embodiment of the disclosure, the multiple frames of images to be fused may be fused directly by means of image processing, or fused by means of a deep learning network, or fused in any other feasible manner to obtain the target image, which is not limited. A minimal direct-fusion sketch follows.
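As one possible form of the direct image-processing option, the sketch below simply averages the frames; this plain mean is an assumption for illustration, and a learned fusion network would replace it in the deep-learning variant.

```python
import numpy as np

def fuse_frames(frames):
    """Fuse equally sized frames into one target image by averaging."""
    stack = np.stack(frames).astype(np.float32)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

# Hypothetical frames to be fused.
frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8)
          for _ in range(4)]
target = fuse_frames(frames)
```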
In this disclosure, the processing of the aligned images may be as shown in fig. 11, which is a schematic diagram of an image fusion manner in an embodiment of the disclosure. For the aligned images, a combined image is first formed from pixels of the same kind. A deep-learning fusion network then extracts the image semantic features corresponding to the combined image through a network model (for example, a semantic segmentation network or another deep learning network), the image semantic features are up-sampled by means of the pixel rearrangement (pixel shuffle) technique, and a convolution layer reconstructs a high-quality, high-resolution target image. If there are 4 aligned images, the dimensions of the input may be set to (2H, 2W, 4), where "H" represents the length of an aligned image, "W" represents its width, and "4" represents the number of aligned images; the combined image obtained from this input then has dimensions (H, W, 16). The image semantic features extracted through the network model have dimensions (H, W, 2s×2s×512), where "s" denotes the up-sampling factor, so that after pixel rearrangement and the convolution layer a high-quality, high-resolution target image is reconstructed. A shape walkthrough follows.
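The dimension bookkeeping above can be checked with a short sketch; the sizes, the stand-in feature tensor, and the factor s = 1 are assumptions for illustration, and pixel_unshuffle here plays the role of regrouping same-kind pixels into channels.

```python
import torch
import torch.nn.functional as F

H, W = 120, 160
aligned = torch.randn(1, 4, 2 * H, 2 * W)      # 4 aligned frames of size (2H, 2W)

# Regroup each 2x2 pixel neighborhood into channels: 4 frames x 4 phases.
combined = F.pixel_unshuffle(aligned, 2)       # -> (1, 16, H, W)
print(combined.shape)                          # torch.Size([1, 16, 120, 160])

# Stand-in for network features of depth (2s)^2 * C, here with s = 1, C = 3.
s = 1
feats = torch.randn(1, (2 * s) ** 2 * 3, H, W)
target = F.pixel_shuffle(feats, 2 * s)         # -> (1, 3, 2sH, 2sW)
print(target.shape)                            # torch.Size([1, 3, 240, 320])
```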
For example, as shown in fig. 12, which is a comparison chart of pixel shift processing results according to another embodiment of the disclosure, the target image obtained by using the pixel shift technique has higher quality and resolution, and the image processing effect is better.
In this embodiment, image details can be effectively preserved when image processing is performed on the area images captured by different imaging areas of the image sensor to generate the target image to be output, so the image processing quality is guaranteed and the image processing effect is improved. Because optical flow feature extraction and up-sampling processing are performed on the cropped images before image alignment, image details are effectively preserved during alignment and the detail loss otherwise introduced by the alignment processing is avoided; at the same time, the processing resources required for the alignment processing are reduced, balancing alignment quality against alignment efficiency. Processing the combined image according to the image semantic features to obtain the image to be fused effectively reduces the influence of noise in the images to be fused, further improving the image quality of the images to be fused and enhancing the image processing effect.
Fig. 13 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 13, the image processing apparatus 130 includes:
an obtaining module 1301, configured to obtain an original image, where the original image includes a plurality of area images, each area image corresponds to a different imaging area of the image sensor, the center pixels of the different imaging areas are not identical, and the center pixel is the pixel, among the pixels of an imaging area, that corresponds to the center of the field of view of the lens;
a generating module 1302, configured to generate a target image to be output based on the plurality of area images.
In some embodiments of the present disclosure, as shown in fig. 14, which is a schematic structural diagram of an image processing apparatus according to another embodiment of the present disclosure, the generating module 1302 includes:
a first processing sub-module 13021, configured to perform alignment processing on the plurality of area images to obtain an aligned image, where the alignment processing makes the pixel kinds at corresponding positions of the area images the same;
a second processing sub-module 13022, configured to obtain the target image based on the aligned image.
In some embodiments of the present disclosure, as shown in fig. 14, the first processing sub-module 13021 is specifically configured to:
cropping the plurality of region images, where, for a third region image and a fourth region image that are adjacent in any second direction and share a side, a first center pixel within the cropped region of the third region image has a pixel offset relative to a second center pixel within the cropped region of the fourth region image;
performing alignment processing on the cropped images to obtain the aligned image;
where the pixel offset means that, within the cropped region of the third region image, the pixel at the position corresponding to the second center pixel is the pixel of the same kind as the second center pixel that is reached by shifting from the first center pixel along the second direction (a cropping sketch follows).
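Under stated assumptions (the crop window size and offsets are illustrative), the sketch below crops two adjacent region images with a one-pixel offset along the shared direction, which is enough to move the crop onto a pixel of the same kind in an RGGB mosaic.

```python
import numpy as np

def crop_with_offset(region, top, left, size, dy=0, dx=0):
    """Crop a size x size window, optionally shifted by (dy, dx) pixels."""
    return region[top + dy : top + dy + size,
                  left + dx : left + dx + size]

third = np.random.randint(0, 256, (500, 500), dtype=np.uint8)
fourth = np.random.randint(0, 256, (500, 500), dtype=np.uint8)

crop3 = crop_with_offset(third, 10, 10, 480)         # reference crop
crop4 = crop_with_offset(fourth, 10, 10, 480, dx=1)  # shifted one pixel along x
```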
In some embodiments of the present disclosure, as shown in fig. 14, the first processing sub-module 13021 is specifically configured to:
generating an optical flow feature map corresponding to the cropped images;
performing up-sampling processing on the optical flow feature map to obtain an up-sampled image;
and performing alignment processing on the up-sampled image to obtain the aligned image (a flow-based alignment sketch follows).
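For illustration only, the sketch below aligns one cropped image to another via dense optical flow; Farneback flow is a classical stand-in assumption for whatever flow network the embodiments use, and the synthetic shift makes the example self-contained.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
ref = (rng.random((240, 320)) * 255).astype(np.uint8)  # hypothetical cropped frame
mov = np.roll(ref, shift=2, axis=1)                    # simulated misaligned frame

# Dense optical flow from ref to mov.
flow = cv2.calcOpticalFlowFarneback(ref, mov, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Warp mov back onto ref by sampling it at ref positions displaced by flow.
h, w = ref.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
aligned = cv2.remap(mov, map_x, map_y, cv2.INTER_LINEAR)
```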
In some embodiments of the present disclosure, as shown in fig. 14, within the cropped region of each region image, the center pixel of the region image is located at the center position of the cropped region, or the center pixel of the region image is closer to the center position of the region than any non-center pixel of the region image;
where a non-center pixel of a region image is any pixel of the region image other than its center pixel.
In some embodiments of the present disclosure, as shown in fig. 14, the second processing sub-module 13022 is specifically configured to:
extracting a plurality of pixels of the same kind from the aligned image, where the pixels have corresponding pixel positions in the aligned image;
combining a plurality of pixels according to the pixel positions to obtain an image to be fused;
and fusing the multi-frame images to be fused to obtain the target image.
In some embodiments of the present disclosure, as shown in fig. 14, the second processing sub-module 13022 is specifically configured to:
combining a plurality of pixels according to the pixel positions to obtain a combined image;
determining image semantic features corresponding to the combined image;
and carrying out up-sampling processing on the combined image according to the semantic features of the image so as to obtain an image to be fused.
Corresponding to the image processing methods provided by the embodiments of fig. 4 to 12 described above, the present disclosure also provides an image processing apparatus. Since the image processing apparatus provided by the embodiments of the present disclosure corresponds to those image processing methods, the implementations of the image processing method are also applicable to the image processing apparatus and are not described in detail here.
In this embodiment, an original image is obtained, where the original image includes a plurality of area images, each area image corresponds to a different imaging area of an image sensor, the center pixels of the different imaging areas are not identical, and the center pixel is the pixel, among the pixels of an imaging area, that corresponds to the center of the field of view of the lens. A target image to be output is generated based on the plurality of area images. Because image details can be effectively preserved when image processing is performed on the area images captured by the different imaging areas of the image sensor to generate the target image, the image processing quality is guaranteed and the image processing effect is improved.
Fig. 15 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure.
As shown in fig. 15, the terminal 150 includes a camera module 10.
Fig. 16 illustrates a block diagram of an exemplary electronic device suitable for use in implementing embodiments of the present disclosure. The electronic device 12 shown in fig. 16 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 16, the electronic device 12 takes the form of a general purpose computing device. Components of the electronic device 12 may include, but are not limited to: one or more processors or processing units 17, a system memory 28, and a bus 18 that connects the various system components, including the system memory 28 and the processing units 17. Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the industry standard architecture (Industry Standard Architecture; hereinafter ISA) bus, the micro channel architecture (Micro Channel Architecture; hereinafter MCA) bus, the enhanced ISA bus, the video electronics standards association (Video Electronics Standards Association; hereinafter VESA) local bus, and the peripheral component interconnect (Peripheral Component Interconnection; hereinafter PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (Random Access Memory; hereinafter: RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 16, commonly referred to as a "hard disk drive").
Although not shown in fig. 16, a magnetic disk drive for reading from and writing to a removable nonvolatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable nonvolatile optical disk (e.g., a compact disk read only memory (Compact Disc Read Only Memory; hereinafter CD-ROM), digital versatile read only optical disk (Digital Video Disc Read Only Memory; hereinafter DVD-ROM), or other optical media) may be provided. In such cases, each drive may be coupled to bus 18 through one or more data medium interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
A program/utility 40 having a set (at least one) of program modules 42 may be stored in, for example, memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 42 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 12 may also communicate with one or more external devices 15 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the electronic device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the electronic device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks, such as a local area network (Local Area Network; hereinafter LAN), a wide area network (Wide Area Network; hereinafter WAN) and/or a public network such as the Internet, via the network adapter 20. As shown, the network adapter 20 communicates with the other modules of the electronic device 12 over the bus 18. It should be appreciated that, although not shown, other hardware and/or software modules may be used in conjunction with the electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processing unit 17 executes various functional applications and data processing by running a program stored in the system memory 28, for example, implementing the image processing method mentioned in the foregoing embodiment.
In order to implement the above embodiments, the present disclosure further provides an electronic device, including: a processor, and a memory storing a computer program executable by the processor; the image processing method of the foregoing embodiments of the present disclosure is implemented when the processor executes the computer program.
In order to implement the above-described embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image processing method as proposed in the foregoing embodiments of the present disclosure.
In order to implement the above-described embodiments, the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the image processing method of the foregoing embodiments of the present disclosure.
In the above embodiments, the implementation may be wholly or partly in software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented wholly or partly in the form of a computer program product. The computer program product comprises one or more computer programs. When the computer program is loaded and executed on a computer, the flows or functions described in accordance with the embodiments of the present disclosure are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center containing an integration of one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (digital video disc, DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that: the various numbers of first, second, etc. referred to in this disclosure are merely for ease of description and are not intended to limit the scope of embodiments of this disclosure, nor to indicate sequencing.
In the present disclosure, "at least one" may also be described as one or more, and "a plurality" may be two, three, four, or more, which is not limited. In the embodiments of the disclosure, where a technical feature appears multiple times, the instances are distinguished by "first", "second", "third", "A", "B", "C", and "D", and the technical features so described carry no order of sequence or of magnitude.
The correspondence relationships shown in the tables of the present disclosure may be configured or predefined. The values of the information in each table are merely examples and may be configured as other values, which the present disclosure does not limit. When configuring the correspondence between the information and the parameters, it is not necessarily required to configure all the correspondences shown in each table. For example, the correspondences shown in some rows of a table in the present disclosure may not be configured. For another example, appropriate adjustments such as splitting and merging may be made based on the tables described above. The names of the parameters indicated in the tables may be other names understood by the communication device, and the values or expressions of the parameters may be other values or expressions understood by the communication device. When the tables are implemented, other data structures may also be used, for example, an array, a queue, a container, a stack, a linear table, a pointer, a linked list, a tree, a graph, a structure, a class, a heap, or a hash table.
Predefined in this disclosure may be understood as defined, predefined, stored, pre-negotiated, pre-configured, hard-coded, or pre-burned.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto. Any change or substitution that a person skilled in the art can easily conceive of within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (15)
- A camera module, characterized by comprising an image sensor and at least two lenses; each lens corresponds to a different imaging area of the image sensor, and the center pixels of the different imaging areas are not identical; the center pixel is the pixel, among the pixels of an imaging area, corresponding to the center of the field of view of the lens corresponding to that imaging area.
- The camera module of claim 1, wherein the kinds of the center pixels of the different imaging areas are equal to the kinds of the pixels in the image sensor.
- The camera module of claim 1 or 2, wherein, for a first lens and a second lens that are adjacent in any first direction among the at least two lenses, the first center pixel corresponding to the imaging area of the first lens is different from the second center pixel corresponding to the imaging area of the second lens, or the arrangement of the pixels adjacent to the first center pixel is different from the arrangement of the pixels adjacent to the second center pixel.
- The camera module of any one of claims 1-3, wherein, for any adjacent first imaging area and second imaging area sharing a side edge, the distance between the field-of-view center of a third lens corresponding to the first imaging area and the field-of-view center of a fourth lens corresponding to the second imaging area is smaller than or equal to the sum of the half side length of the third lens and the half side length of the fourth lens; the half side length of the third lens is half the distance between the farthest points in the orthographic projection formed by the third lens, and the half side length of the fourth lens is half the distance between the farthest points in the orthographic projection formed by the fourth lens.
- An image processing method, comprising: acquiring an original image, wherein the original image comprises a plurality of area images, each area image corresponds to a different imaging area of an image sensor, the center pixels of the different imaging areas are not identical, and the center pixels are the pixels, among the pixels of the imaging areas, corresponding to the center of the field of view of a lens; and generating a target image to be output based on the plurality of area images.
- The image processing method according to claim 5, wherein the generating the target image to be output based on the plurality of area images includes: performing alignment processing on the plurality of region images to obtain an aligned image, the alignment processing being used for making the pixel kinds at corresponding positions of the region images the same; and obtaining the target image based on the aligned image.
- The image processing method according to claim 6, wherein the performing alignment processing on the plurality of area images to obtain the aligned image includes: cropping the plurality of region images, wherein, for a third region image and a fourth region image that are adjacent in any second direction and share a side, a first center pixel within the cropped region of the third region image has a pixel offset relative to a second center pixel within the cropped region of the fourth region image; and performing alignment processing on the cropped images to obtain the aligned image; the pixel offset means that, within the cropped region of the third region image, the pixel at the position corresponding to the second center pixel is the pixel of the same kind as the second center pixel reached by shifting from the first center pixel along the second direction.
- The image processing method according to claim 7, wherein the performing alignment processing on the cropped images to obtain the aligned image includes: generating an optical flow feature map corresponding to the cropped images; performing up-sampling processing on the optical flow feature map to obtain an up-sampled image; and performing alignment processing on the up-sampled image to obtain the aligned image.
- The image processing method according to claim 7, wherein, within the cropped region of each region image, the center pixel of the region image is located at the center position of the cropped region, or the center pixel of the region image is closer to the center position of the region than any non-center pixel of the region image; wherein a non-center pixel of a region image is a pixel of the region image other than the center pixel.
- The image processing method according to claim 6, wherein the obtaining the target image based on the aligned image includes: extracting a plurality of pixels of the same kind from the aligned image, wherein the pixels have corresponding pixel positions in the aligned image; combining the pixels according to the pixel positions to obtain an image to be fused; and fusing multiple frames of images to be fused to obtain the target image.
- The image processing method according to claim 10, wherein the combining the plurality of pixels according to the pixel positions to obtain the image to be fused includes: combining the plurality of pixels according to the pixel positions to obtain a combined image; determining the image semantic features corresponding to the combined image; and performing up-sampling processing on the combined image according to the image semantic features to obtain the image to be fused.
- An image processing apparatus, comprising: an acquisition module, configured to acquire an original image, wherein the original image comprises a plurality of area images, each area image corresponds to a different imaging area of an image sensor, the center pixels of the different imaging areas are not identical, and the center pixels are the pixels, among the pixels of the imaging areas, corresponding to the center of the field of view of the lens; and a generation module, configured to generate a target image to be output based on the plurality of area images.
- A terminal comprising the camera module of any one of claims 1-4.
- An electronic device, comprising: a camera module; at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any one of claims 5-11.
- A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the image processing method of any one of claims 5-11.
Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| PCT/CN2022/097972 (WO2023236162A1) | 2022-06-09 | 2022-06-09 | Camera module, image processing method and apparatus, terminal, electronic device and medium |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| CN117643047A | 2024-03-01 |
Family
ID=89117410
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202280004460.4A | Camera module, image processing method and device, terminal, electronic equipment and medium | 2022-06-09 | 2022-06-09 |
Country Status (2)

| Country | Link |
| --- | --- |
| CN | CN117643047A |
| WO | WO2023236162A1 |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| WO2023236162A1 | 2023-12-14 |
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination