Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. Many modifications may be made by those skilled in the art without departing from the spirit of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Fig. 1 is a schematic diagram of an application environment of the camera module calibration method in one embodiment. As shown in fig. 1, the application environment includes an electronic device 110 having a first camera module 111, a second camera module 112, and a depth camera module 113. The mechanical arrangement of the first camera module 111, the second camera module 112 and the depth camera module 113 may be: the first camera module 111, the second camera module 112, and the depth camera module 113 are arranged in sequence, as shown in fig. 1a; or the first camera module 111, the depth camera module 113 and the second camera module 112 are arranged in sequence, as shown in fig. 1b; or the second camera module 112, the first camera module 111, and the depth camera module 113 are arranged in sequence, as shown in fig. 1c; or the second camera module 112, the depth camera module 113 and the first camera module 111 are arranged in sequence (not shown in the figure); or the depth camera module 113, the second camera module 112, and the first camera module 111 are arranged in sequence (not shown in the figure); or the depth camera module 113, the first camera module 111, and the second camera module 112 are arranged in sequence (not shown in the figure).
The first camera module 111 and the second camera module 112 may be any camera modules known in the art and are not limited herein. For example, the first camera module 111 and the second camera module 112 may be visible light camera modules (RGB cameras) that acquire RGB images. The depth camera module 113 is a time-of-flight (TOF) camera or a structured light camera.
Fig. 2 is a flowchart of a camera module calibration method according to an embodiment of the present invention, and the camera module calibration method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 2, the camera module calibration method includes steps 201 to 204.
Step 201, in the same scene, acquiring a first image of the scene through a first camera module, acquiring a second image of the scene through a second camera module, and acquiring a depth image of the scene through a depth camera module.
the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
Step 202, extracting the same pixel points of the first image and the second image, determining parallax information, and calculating a first depth according to the parallax information.
Image recognition is a classification process that distinguishes an image from images of other classes. Pixel points of the first image and the second image are extracted by a scale-invariant feature transform (SIFT) method or a speeded up robust features (SURF) method, the pixel points extracted from the first image are matched with the pixel points extracted from the second image by a stereo matching algorithm to obtain matched pixel points, the parallax information of the scene chart1 is obtained, and the parallax information is converted into depth information by calculation according to the triangulation ranging principle.
SIFT is a machine vision algorithm used to detect and describe local features in an image: it searches for extreme points over spatial scales and extracts descriptors that are invariant to position, scale and rotation. Its applications include object recognition, robot mapping and navigation, image stitching, 3D model construction, gesture recognition, image tracking and motion comparison.
SURF is a feature point extraction algorithm proposed by H. Bay after the SIFT algorithm. It performs block-wise feature matching on the basis of SIFT using an integral image technique, which further accelerates the computation; at the same time, a feature descriptor generated from a second-order multi-scale template is used, which improves the robustness of feature point matching.
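As an illustration only, the feature extraction and matching described above could be realized with an open-source library such as OpenCV; the following Python sketch is one possible implementation under that assumption, not the claimed method itself. The function name match_feature_points and the 0.75 ratio-test value are chosen only for the example.

```python
import cv2
import numpy as np

def match_feature_points(first_image, second_image):
    """Detect SIFT keypoints in the first and second images and keep the good matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(first_image, None)
    kp2, des2 = sift.detectAndCompute(second_image, None)

    # Brute-force matcher with Lowe's ratio test to reject ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

    # Matched pixel coordinates in the first and second images.
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```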
The stereo matching algorithm is one of the most active research topics in computer vision. The process is as follows. First, matching cost computation: for each pixel point IR(p) on the reference image, the cost of matching it to the corresponding point IT(pd) on the target image is computed over all possible disparities, and the computed cost values are stored in a three-dimensional array commonly called the disparity space image (DSI). Second, cost aggregation: the matching costs within a support window are aggregated by summation, averaging or other methods to obtain the accumulated cost CA(p, d) of point p on the reference image at disparity d; cost aggregation reduces the influence of outliers and improves the signal-to-noise ratio (SNR), thereby improving matching accuracy. Third, disparity computation: a winner-takes-all (WTA) strategy is adopted, that is, the point with the best accumulated cost within the disparity search range is selected as the corresponding matching point, and the corresponding disparity is the required disparity. Finally, the left and right images are taken as reference images in turn, so left and right disparity maps are obtained after the above steps; the disparity maps are then optimized and corrected by further post-processing. Commonly used methods include interpolation, sub-pixel enhancement, refinement and image filtering, whose specific steps are not described here again.
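For completeness, a hedged sketch of how such a pipeline could be invoked in practice is given below, using OpenCV's semi-global block matcher as a stand-in for the cost computation, aggregation, WTA and refinement steps described above; the parameter values are illustrative assumptions, not values from this disclosure.

```python
import cv2

def compute_disparity(left_gray, right_gray, num_disparities=128, block_size=5):
    """Return a disparity map (in pixels) for a rectified stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,   # must be a multiple of 16
        blockSize=block_size,
        P1=8 * block_size * block_size,   # smoothness penalties (cost aggregation)
        P2=32 * block_size * block_size,
        uniquenessRatio=10,               # winner-takes-all confidence margin
        speckleWindowSize=100,            # post-processing of small speckles
        speckleRange=2,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(float) / 16.0
```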
The triangulation ranging principle is the most common optical three-dimensional measurement technique. Based on traditional triangulation, the depth of a point to be measured is calculated from the angular change produced by the deviation of the point relative to an optical reference line.
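For a rectified stereo pair, the conversion from parallax to depth mentioned above reduces to the standard relation below, where f is the focal length in pixels, B is the baseline between the first and second camera modules, and d is the disparity of a matched pixel pair; the symbols are generic and not taken from this disclosure.

```latex
% standard depth-from-disparity relation for a rectified stereo pair
Z = \frac{f \cdot B}{d}
```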
Step 203, determining a second depth according to the depth image. The depth image is used for describing the depth information of the scene; the outline of the scene is represented in different colors for different distances, that is, the second depth is described by the colors of the depth map. The depth camera module in step 203 may be a TOF camera or a structured light camera. The structured light camera projects a controllable light spot, light bar or light plane structure onto the surface of the measured object, receives the light reflected from the controllable light spot, light bar or light plane, and obtains the depth image according to the deformation of the reflected light pattern. The TOF camera emits near infrared light to the scene, receives the reflected near infrared light, obtains the depth information of the scene by calculating the time difference or phase difference of the reflected near infrared light, and represents the outline of the scene with different colors for different distances to obtain the depth image.
Step 204, comparing the first depth with the second depth, and if the difference between the first depth and the second depth is smaller than a preset threshold, generating a prompt signal indicating that the calibration test is passed.
The first depth is calculated according to the parallax information determined from the first image and the second image, the second depth is obtained from the depth image, the absolute value of the difference between the first depth and the second depth is then obtained, and this absolute difference is compared with a preset threshold. The preset threshold is set by an engineer during the calibration process of the camera module and is not limited here; its value is determined according to the specific situation. If the absolute difference is smaller than the preset threshold, it indicates that the actual error of the calibration result of the camera module is within the allowable error range, and a calibration-test-passed prompt signal is generated; the prompt signal is used to prompt a calibration test processing unit of the electronic device that the calibration result of the camera module has passed the calibration test.
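A minimal sketch of the comparison in step 204 (and of the failure branch of step 305 described below) might look as follows; the threshold value of 0.05 and the function name calibration_test are assumptions made only for the example.

```python
def calibration_test(first_depth, second_depth, preset_threshold=0.05):
    """Compare the depth from parallax with the depth from the depth image."""
    difference = abs(first_depth - second_depth)
    if difference < preset_threshold:
        # The actual calibration error is within the allowable range.
        return "calibration test passed"
    # Otherwise (step 305 below): generate a failure signal and trigger a
    # second calibration of the camera module.
    return "calibration test failed"
```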
In one embodiment, as shown in fig. 3, the method for calibrating a camera module further includes:
Step 305, if the difference between the first depth and the second depth is greater than or equal to the preset threshold, generating a calibration test failure prompt signal and calibrating the camera module a second time.
As in step 204, the first depth is calculated from the parallax information determined from the first image and the second image, the second depth is obtained from the depth image, the absolute value of the difference between the first depth and the second depth is obtained, and this absolute difference is compared with the preset threshold; the preset threshold is set by an engineer during the calibration process of the camera module, is not limited here, and is determined according to the specific situation. If the absolute difference is greater than or equal to the preset threshold, it indicates that the actual error of the calibration result of the camera module exceeds the allowable error range, and a calibration test failure prompt signal is generated; the prompt signal is used to prompt the calibration test processing unit of the electronic device that the calibration result of the camera module has not passed the calibration test, so the camera module needs to be calibrated a second time. Camera module calibration restores an object in space from an image shot by the camera module: a linear relationship exists between the image shot by the camera and the object in three-dimensional space, that is, the image coordinates are related to the spatial coordinates through a physical matrix, and the physical matrix can be regarded as a geometric model of camera imaging. The parameters in the physical matrix are the camera parameters, and the process of solving the parameters of the physical matrix is called camera calibration. The camera module calibration algorithm can be briefly described as: printing a template and attaching it to a plane; shooting several template images from different angles; detecting feature points in the images; solving the intrinsic and extrinsic parameters of the camera; solving the distortion coefficients; and optimizing the results by refinement.
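One common way to carry out the template-based calibration summarised above is OpenCV's chessboard calibration; the sketch below is an assumed illustration of that flow (the board size, square size and helper name calibrate_from_templates are examples), not the specific algorithm of this disclosure.

```python
import cv2
import numpy as np

def calibrate_from_templates(template_images, board_size=(9, 6), square=1.0):
    """Solve intrinsic parameters and distortion coefficients from template shots."""
    # Planar template points (the printed pattern attached to a plane).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square

    obj_pts, img_pts, image_size = [], [], None
    for img in template_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:                                   # detect feature points in the image
            obj_pts.append(objp)
            img_pts.append(corners)

    # Solve intrinsic/extrinsic parameters and distortion coefficients; the
    # result is refined internally by non-linear optimisation.
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, image_size, None, None)
    return camera_matrix, dist_coeffs, rms
```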
In one embodiment, as shown in fig. 4, acquiring the depth image of the scene by the depth camera module includes: the depth camera module is an anti-shake camera module having an optical image stabilization (OIS) device, and the anti-shake camera module enables the optical anti-shake function and acquires the depth image. Optical anti-shake avoids or reduces shaking of the camera or similar imaging instrument during capture of the optical signal by means of movable optical components, such as a lens, thereby improving imaging quality. Optical anti-shake is the anti-shake technology most widely accepted by the public: it compensates the light path for hand shake through a movable component, thereby reducing blur in the photograph.
In one embodiment, the anti-shake camera module starts the optical anti-shake function and obtains the depth image, including: step 401, acquiring current jitter data of a depth camera module, wherein the jitter data comprises position change data; step 402, determining offset data of an anti-shake lens of the anti-shake camera module according to a relation between preset shake data and position change of the anti-shake lens; and step 403, adjusting the position of the anti-shake lens according to the anti-shake lens offset data, and acquiring the depth image for the second time.
In one embodiment, the anti-shake camera module enables the optical anti-shake function and obtains the depth image as follows. Step 401, acquiring current shake data of the depth camera module, wherein the shake data includes position change data; the camera module changes position when it shakes, and the degree of shake of the camera module is represented as shake data by a quantized value. The shake data includes the position change data and the angle change when the camera module shakes, and the shake data may be detected by a shake detection device. Step 402, determining offset data of an anti-shake lens of the anti-shake camera module according to a preset relationship between the shake data and the position change of the anti-shake lens; the preset relationship is determined in advance from the positional relationship between the camera shake detection device and the anti-shake lens, the relationship between the shake direction and the moving direction of the anti-shake lens, and the relationship between the focal length, the shake distance, the shake angle and the moving distance of the anti-shake lens. Through this relationship, the anti-shake movement data of the anti-shake lens corresponding to any shake data can be obtained. Step 403, adjusting the position of the anti-shake lens according to the anti-shake lens offset data, and acquiring the depth image a second time. The anti-shake lens movement data includes a moving direction and a moving distance; the position of the anti-shake lens is determined according to the movement data, and the anti-shake lens is adjusted to complete the anti-shake compensation movement, so that the image sensor shoots the image after the anti-shake compensation. In this way, an anti-shake depth image can be obtained.
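The mapping from shake data to lens offset is module-specific and preset as described above; the following is only a simplified, hypothetical small-angle model of steps 401-403 (the tangent relation and the 4 mm focal length are assumptions for the example, not values from this disclosure).

```python
import math

def anti_shake_lens_offset(shake_angle_deg, focal_length_mm):
    """Offset (mm) that would move the image back to its pre-shake position."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

def compensate(shake_x_deg, shake_y_deg, focal_length_mm=4.0):
    # Step 402: turn the shake data into anti-shake lens offset data.
    dx = anti_shake_lens_offset(shake_x_deg, focal_length_mm)
    dy = anti_shake_lens_offset(shake_y_deg, focal_length_mm)
    # Step 403: the actuator would move the lens by (-dx, -dy) before the
    # depth image is acquired a second time.
    return -dx, -dy
```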
In one embodiment, as shown in fig. 5, acquiring a depth image by a depth camera module includes:
Step 501, identifying an object in the depth image, and acquiring a recognition confidence of the object;
Step 502, comparing the recognition confidence with a set threshold, and acquiring a difference value;
Step 503, if the difference between the recognition confidence and the set threshold meets a preset condition, performing optical zooming and/or digital zooming on the object and acquiring the depth image a second time.
In this embodiment, because the depth camera module determines the distance between each object in the image and the camera module by shooting, it is particularly important for the depth camera module to accurately identify the objects in the image. When an object in an image is identified, if the object is severely distorted or occupies too small a proportion of the picture, it is often difficult to identify it accurately. The depth image is acquired by the depth camera module, the similarity between the object picture in the depth image and the actual object picture is calculated, the maximum similarity is determined, and the recognition confidence of the object to be detected is calculated according to the maximum similarity. It is then judged whether the recognition confidence of the object to be detected is smaller than the set threshold. If the object is recognized but the recognition confidence is smaller than the set threshold, the optical focal length of the depth camera module is adjusted to shoot the image again, or digital zooming is performed on the existing image, and object recognition is performed again on the image obtained the second time until the recognition confidence is greater than the set threshold, whereupon the depth image of the object is output. If the recognition confidence of the object is greater than or equal to the set threshold, the depth image of the object is output directly. It should be noted that if the object is not recognized at all, the optical focal length of the camera module is adjusted to shoot the image again. The set threshold of the recognition confidence is determined by an engineer during software design according to the hardware conditions and the specific situation, and is not limited here.
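A pseudocode-style sketch of the loop implied by steps 501-503 follows; capture_depth, recognise_object and apply_zoom are hypothetical helpers standing in for the module's own routines, and the threshold of 0.8 is only an example value.

```python
def acquire_depth_with_zoom(camera, set_threshold=0.8, max_attempts=5):
    """Re-capture with optical/digital zoom until the object is recognised confidently."""
    depth_image = camera.capture_depth()
    for _ in range(max_attempts):
        label, confidence = camera.recognise_object(depth_image)
        if label is not None and confidence >= set_threshold:
            return depth_image                      # output the depth image directly
        # Difference with the set threshold meets the preset condition:
        # apply optical and/or digital zoom and acquire the image again.
        camera.apply_zoom()
        depth_image = camera.capture_depth()
    return depth_image
```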
In one embodiment, the depth camera module optically and/or digitally zooms an object, comprising: the optical zoom and/or the digital zoom is performed at a preset brightness.
In one embodiment, performing the optical zoom and the digital zoom at the preset brightness includes: at a first brightness, performing digital zooming; at a second brightness, performing optical zooming until a predetermined proportion of the maximum optical zoom is reached and then performing digital zooming; and at a third brightness, performing optical zooming until the maximum optical zoom is reached and then performing digital zooming, wherein the first brightness, the second brightness and the third brightness are defined by the preset brightness.
It is noted that the preset brightness is set according to limit values of the light level of the illuminated area: a light level of 1-100 lux is defined as the first brightness; a light level of 100-1000 lux is defined as the second brightness; and a light level greater than 1000 lux is defined as the third brightness. The skilled person will understand that the selected illumination levels are used for exemplary purposes only.
At the first brightness, digital zoom or optical zoom may be used directly. Digital zoom preserves more illumination in the final depth image than optical zoom, so only digital zoom is used up to a reasonable zoom level; if further zooming is required beyond that level, optical zooming is then used. In other words, digital zoom has a higher priority than optical zoom at the first brightness, because the reduction in light level caused by optical magnification is more noticeable than the reduction caused by digital zoom.
At the second brightness, optical zooming may be used first, after which digital zooming is performed if further zooming is required. The amount of optical zoom of the camera module may vary, but, by way of example, optical zoom may be used until it reaches approximately half of its maximum value or some other predetermined proportion. As mentioned, the proportion is a predetermined amount and may be 40%, 50%, 60%, 70%, 80%, 90%, or any value in the range of 30%-100%, depending on the function of the imaging apparatus, and is not limited here. In other words, under the second brightness condition some amount of optical zoom may be used, but in order to avoid losing too much light, part of the zooming is done by digital zoom.
At the third brightness, optical zooming may be used, for example, up to its maximum. After the maximum optical zoom is reached, and depending on whether further zooming is required, digital zooming may be performed. In other words, in bright light conditions the reduction in the level of light reaching the optical image sensor caused by optical zoom can be tolerated, which overcomes the disadvantage of digital zoom, namely that not all pixels contribute to the final image quality.
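As a rough illustration of the three-level policy above, the lux boundaries (100 and 1000 lux) and the 50% optical proportion below are example values taken from the ranges mentioned in the text; the function plan_zoom is an assumption for the sketch, not part of the disclosure.

```python
def plan_zoom(lux, requested_zoom, max_optical_zoom, optical_ratio=0.5):
    """Split a requested zoom factor into (optical, digital) parts by brightness."""
    if lux <= 100:                       # first brightness: favour digital zoom
        optical = 1.0
    elif lux <= 1000:                    # second brightness: partial optical zoom
        optical = min(requested_zoom, max_optical_zoom * optical_ratio)
    else:                                # third brightness: optical zoom up to its maximum
        optical = min(requested_zoom, max_optical_zoom)
    digital = max(requested_zoom / optical, 1.0)
    return optical, digital

# Example: in dim light (50 lux) a 4x request is met entirely by digital zoom.
# plan_zoom(50, 4.0, max_optical_zoom=5.0) -> (1.0, 4.0)
```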
Fig. 6 is a schematic structural diagram of an image processing apparatus provided in an embodiment. An embodiment of the present application further provides a camera module calibration apparatus, which is applied to an electronic device having a first camera module, a second camera module and a depth camera module, and which includes:
the image obtaining module 601 is used for obtaining a first image through a first camera module, obtaining a second image of a scene through a second camera module, and obtaining a depth image through a depth camera module in the same scene; the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
The first obtaining module 602 is configured to extract the same pixel points of the first image and the second image, determine parallax information, and calculate the first depth according to the parallax information. Image recognition is a classification process that distinguishes an image from images of other classes. Pixel points of the first image and the second image are extracted by the scale-invariant feature transform (SIFT) method or the speeded up robust features (SURF) method, the pixel points extracted from the first image are matched with the pixel points extracted from the second image by a stereo matching algorithm to obtain matched pixel points, the parallax information of the scene chart1 is obtained, and the parallax information is converted into depth information by calculation according to the triangulation ranging principle.
SIFT is a machine vision algorithm used to detect and describe local features in an image: it searches for extreme points over spatial scales and extracts descriptors that are invariant to position, scale and rotation. Its applications include object recognition, robot mapping and navigation, image stitching, 3D model construction, gesture recognition, image tracking and motion comparison.
SURF is a feature point extraction algorithm proposed by H. Bay after the SIFT algorithm. It performs block-wise feature matching on the basis of SIFT using an integral image technique, which further accelerates the computation; at the same time, a feature descriptor generated from a second-order multi-scale template is used, which improves the robustness of feature point matching.
The stereo matching algorithm is one of the most active research topics in computer vision. The process is as follows. First, matching cost computation: for each pixel point IR(p) on the reference image, the cost of matching it to the corresponding point IT(pd) on the target image is computed over all possible disparities, and the computed cost values are stored in a three-dimensional array commonly called the disparity space image (DSI). Second, cost aggregation: the matching costs within a support window are aggregated by summation, averaging or other methods to obtain the accumulated cost CA(p, d) of point p on the reference image at disparity d; cost aggregation reduces the influence of outliers and improves the signal-to-noise ratio (SNR), thereby improving matching accuracy. Third, disparity computation: a winner-takes-all (WTA) strategy is adopted, that is, the point with the best accumulated cost within the disparity search range is selected as the corresponding matching point, and the corresponding disparity is the required disparity. Finally, the left and right images are taken as reference images in turn, so left and right disparity maps are obtained after the above steps; the disparity maps are then optimized and corrected by further post-processing. Commonly used methods include interpolation, sub-pixel enhancement, refinement and image filtering, whose specific steps are not described here again.
The triangulation ranging principle is the most common optical three-dimensional measurement technique. Based on traditional triangulation, the depth of a point to be measured is calculated from the angular change produced by the deviation of the point relative to an optical reference line.
The second obtaining module 603 is configured to determine the second depth according to the depth image. The depth image is used for describing the depth information of the scene; the outline of the scene is represented in different colors for different distances, that is, the second depth is described by the colors of the depth map. The depth camera module may be a TOF camera or a structured light camera. The structured light camera projects a controllable light spot, light bar or light plane structure onto the surface of the measured object, receives the light reflected from the controllable light spot, light bar or light plane, and obtains the depth image according to the deformation of the reflected light pattern. The TOF camera emits near infrared light to the scene, receives the reflected near infrared light, obtains the depth information of the scene by calculating the time difference or phase difference of the reflected near infrared light, and represents the outline of the scene with different colors for different distances to obtain the depth image.
The calibration test module 604 is configured to compare the first depth with the second depth, and, if the difference between the first depth and the second depth is smaller than the preset threshold, generate a calibration-test-passed prompt signal.
The first depth is calculated according to the parallax information determined from the first image and the second image, the second depth is obtained from the depth image, the absolute value of the difference between the first depth and the second depth is then obtained, and this absolute difference is compared with the preset threshold. The preset threshold is set by an engineer during the calibration process of the camera module and is not limited here; its value is determined according to the specific situation. If the absolute difference is smaller than the preset threshold, it indicates that the actual error of the calibration result of the camera module is within the allowable error range, and a calibration-test-passed prompt signal is generated; the prompt signal is used to prompt the calibration test processing unit of the electronic device that the calibration result of the camera module has passed the calibration test.
It should be understood that although the various steps in the flowcharts of fig. 2-5 are shown in order as indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time but may be performed at different times, and whose order of performance is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 7 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 7, the electronic device includes a processor and a memory connected by a system bus. The processor is used to provide computing and control capability and to support the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the camera module calibration method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The modules in the camera module calibration apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server, and the program modules constituted by the computer program may be stored on the memory of the terminal or the server. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiment of the application also provides the electronic equipment. The electronic device comprises a first camera module, a second camera module, a depth camera module, a memory and a processor, wherein the memory stores computer readable instructions, and when the instructions are executed by the processor, the processor executes the camera module calibration method in any of the embodiments. Included in the electronic device is an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units that define an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only aspects of the image compensation technique related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes a first ISP processor 830, a second ISP processor 840 and a control logic 850. The first camera module 810 includes one or more first lenses 812 and a first image sensor 814. The first image sensor 814 may include a color filter array (e.g., a Bayer filter), and the first image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 814 and provide a set of image data that may be processed by the first ISP processor 830. The second camera module 820 includes one or more second lenses 822 and a second image sensor 824. The second image sensor 824 may include a color filter array (e.g., a Bayer filter), and the second image sensor 824 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 824 and provide a set of image data that may be processed by the second ISP processor 840.
The first image acquired by the first camera module 810 is transmitted to the first ISP processor 830 for processing, after the first ISP processor 830 processes the first image, the statistical data (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) of the first image can be sent to the control logic 850, and the control logic 850 can determine the control parameters of the first camera module 810 according to the statistical data, so that the first camera module 810 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in the image memory 860 after being processed by the first ISP processor 830, and the first ISP processor 830 may also read the image stored in the image memory 860 to process the image. In addition, the first image may be directly transmitted to the display 880 for display after being processed by the ISP processor 830, and the display 880 may also read the image in the image memory 860 for display.
Wherein the first ISP processor 830 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image memory 860 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct memory access) feature.
Upon receiving image data from the interface of the first image sensor 814, the first ISP processor 830 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 860 for additional processing before being displayed. The first ISP processor 830 receives the processed data from the image memory 860 and performs image data processing in the RGB and YCbCr color spaces on the processed data. The image data processed by the first ISP processor 830 may be output to the display 880 for viewing by a user and/or further processing by a graphics processing unit (GPU). Further, the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 880 may read image data from the image memory 860. In one embodiment, the image memory 860 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 830 may be sent to the control logic 850. For example, the statistical data may include first image sensor 814 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, shading correction for first lens 812, and the like. The control logic 850 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the first camera module 810 and control parameters of the first ISP processor 830 based on the received statistical data. For example, the control parameters of the first camera module 810 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 812 shading correction parameters.
Similarly, the second image captured by the second camera module 820 is transmitted to the second ISP processor 840 for processing. After the second ISP processor 840 processes the second image, the statistical data of the second image (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) may be sent to the control logic 850, and the control logic 850 may determine the control parameters of the second camera module 820 according to the statistical data, so that the second camera module 820 may perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 860 after being processed by the second ISP processor 840, and the second ISP processor 840 may also read the image stored in the image memory 860 for processing. In addition, the second image may be transmitted directly to the display 880 for display after being processed by the second ISP processor 840, and the display 880 may also read the image in the image memory 860 for display. The second camera module 820 and the second ISP processor 840 may also implement the processes described for the first camera module 810 and the first ISP processor 830.
In the embodiment of the present application, the image processing technology in fig. 8 is used to implement the steps of the camera module calibration method:
under the same scene, a first image of the scene is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image of the scene is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image, acquiring parallax information, and calculating a first depth through the parallax information;
determining a second depth from the depth image;
and comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the camera module calibration method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a camera module calibration method.
Any reference to memory, storage, database or other medium used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features. It should be noted that "one embodiment," "for example," and the like in the present application are intended to illustrate the present application, and are not intended to limit the present application.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.