
CN109712192B - Camera module calibration method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number
CN109712192B
Authority
CN
China
Prior art keywords: depth, camera module, image, shake, calibration
Prior art date
Legal status
Expired - Fee Related
Application number
CN201811455865.1A
Other languages
Chinese (zh)
Other versions
CN109712192A (en)
Inventor
方攀
陈岩
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811455865.1A
Publication of CN109712192A
Application granted
Publication of CN109712192B

Landscapes

  • Studio Devices (AREA)

Abstract


The invention relates to a camera module calibration method and apparatus, an electronic device and a computer-readable storage medium. The camera module calibration method is applied to an electronic device having a first camera module, a second camera module and a depth camera module. In the same scene, a first image is obtained through the first camera module, a second image of the scene is obtained through the second camera module, and a depth image is obtained through the depth camera module; the same pixel points of the first image and the second image are extracted to obtain parallax information, and a first depth is calculated from the parallax information; a second depth is determined according to the depth image; the first depth and the second depth are compared, and if the difference between the first depth and the second depth is smaller than a preset threshold, a prompt signal that the calibration test is passed is generated. By comparing the depth information of images captured by the calibrated camera modules, whether the camera module calibration result is qualified is checked, thereby improving camera module calibration accuracy.


Description

Camera module calibration method and device, electronic equipment and computer readable storage medium
Technical Field
The present invention relates to the field of image technologies, and in particular, to a method and an apparatus for calibrating a camera module, an electronic device, and a computer-readable storage medium.
Background
Before a camera leaves the factory, it needs to be calibrated to obtain its calibration parameters, and those calibration parameters are tested for qualification, so that the camera can process images according to qualified calibration parameters and the processed images can restore objects in three-dimensional space. However, during use of the camera, different shooting conditions may affect the imaging effect, so the problem of low camera calibration accuracy exists.
Disclosure of Invention
Accordingly, it is desirable to provide a camera module calibration method, device, electronic device and computer-readable storage medium for solving the problem of low camera module calibration accuracy.
Provided is a camera module calibration method, applied to an electronic device having a first camera module, a second camera module and a depth camera module, the method including:
under the same scene, a first image of the scene is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image of the scene is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image, obtaining parallax information, and calculating a first depth according to the parallax information;
determining a second depth from the depth image;
and comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
Provided is a camera module calibration apparatus, including:
the image acquisition module is used for acquiring a first image through the first camera module, acquiring a second image of the scene through the second camera module and acquiring a depth image through the depth camera module in the same scene;
the first acquisition module is used for extracting the same pixel points of the first image and the second image, confirming parallax information and calculating a first depth according to the parallax information;
the second acquisition module is used for determining a second depth according to the depth image;
and the calibration test module is used for comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
under the same scene, a first image of the scene is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image of the scene is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image, obtaining parallax information, and calculating a first depth according to the parallax information;
determining a second depth from the depth image;
and comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
under the same scene, a first image of the scene is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image of the scene is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image, obtaining parallax information, and calculating a first depth according to the parallax information;
determining a second depth from the depth image;
and comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
According to the camera module calibration method, the camera module calibration device, the electronic equipment and the computer-readable storage medium, in the same scene, a first image is obtained through the first camera module, a second image of the scene is obtained through the second camera module, and a depth image is obtained through the depth camera module; the same pixel points of the first image and the second image are extracted, parallax information is obtained, and a first depth is calculated according to the parallax information; a second depth is determined from the depth image; and the first depth and the second depth are compared, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, a prompt signal that the calibration test is passed is generated. By comparing the depth information of the images captured by the calibrated camera modules, whether the camera module calibration result is qualified is checked, thereby improving the calibration accuracy of the camera module.
Drawings
Fig. 1a is a schematic view of an application environment of a camera module calibration method according to an embodiment of the present invention;
fig. 1b is a schematic view of an application environment of a camera module calibration method according to another embodiment of the present invention;
fig. 1c is a schematic view of an application environment of a camera module calibration method according to another embodiment of the present invention;
FIG. 2 is a flowchart of a camera module calibration method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating a camera module calibration method according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating a depth image acquisition process performed by the depth camera module according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating a depth image captured by a depth camera module according to another embodiment of the present invention;
fig. 6 is a block diagram of a camera module calibration apparatus according to an embodiment of the present invention;
FIG. 7 is a block diagram of the internal structure of an electronic device in one embodiment of the invention;
FIG. 8 is a diagram of an image processing circuit according to an embodiment of the invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete. Many modifications may be made by those skilled in the art without departing from the spirit of the invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Figs. 1a to 1c are schematic diagrams of application environments of the camera module calibration method in one embodiment. As shown in figs. 1a to 1c, the application environment includes an electronic device 110 having a first camera module 111, a second camera module 112 and a depth camera module 113. The mechanical arrangement of the first camera module 111, the second camera module 112 and the depth camera module 113 may be: the first camera module 111, the second camera module 112 and the depth camera module 113 arranged in sequence, as shown in fig. 1a; or the first camera module 111, the depth camera module 113 and the second camera module 112 arranged in sequence, as shown in fig. 1b; or the second camera module 112, the first camera module 111 and the depth camera module 113 arranged in sequence, as shown in fig. 1c; or the second camera module 112, the depth camera module 113 and the first camera module 111 arranged in sequence (not shown in the figures); or the depth camera module 113, the second camera module 112 and the first camera module 111 arranged in sequence (not shown in the figures); or the depth camera module 113, the first camera module 111 and the second camera module 112 arranged in sequence (not shown in the figures).
The first camera module 111 and the second camera module 112 are any camera modules in the prior art, and are not limited herein. For example, the first Camera module 111 and the second Camera module 112 may be visible light Camera modules (RGB cameras). The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera.
Fig. 2 is a flowchart of a camera module calibration method according to an embodiment of the present invention, and the camera module calibration method in this embodiment is described by taking the electronic device in fig. 1 as an example. As shown in fig. 2, the camera module calibration method includes steps 201 to 204.
Step 201, in the same scene, acquiring a first image through a first camera module, acquiring a second image of the scene through a second camera module, and acquiring a depth image through a depth camera module;
the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
Step 202, extracting the same pixel points of the first image and the second image, obtaining parallax information, and calculating a first depth according to the parallax information;
Image recognition is a classification process that distinguishes an image from images of other classes. Pixel points of the first image and the second image are extracted using the Scale-Invariant Feature Transform (SIFT) method or the Speeded Up Robust Features (SURF) method; the pixel points extracted from the first image are matched with those extracted from the second image using a stereo matching algorithm to obtain matched pixel point pairs; parallax information of the scene chart1 is acquired; and the parallax information is converted into depth information by calculation according to the triangulation ranging principle.
SIFT is a machine vision algorithm used to detect and describe local features in an image. It searches for extreme points across spatial scales and extracts descriptors of those points that are invariant to position, scale and rotation. Its applications include object recognition, robot map perception and navigation, image stitching, 3D model building, gesture recognition, image tracking and motion comparison.
SURF is a feature point extraction algorithm proposed by H. Bay after the SIFT algorithm. The method uses integral image techniques to perform block-wise feature computation on the basis of the SIFT idea, which further accelerates the calculation; meanwhile, it uses a feature descriptor generated from second-order multi-scale templates, which improves the robustness of feature point matching.
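For illustration, a minimal sketch of the feature extraction and matching step using OpenCV's SIFT implementation (available in opencv-python 4.4 and later) with Lowe's ratio test follows. The patent does not prescribe this particular library, so treat it as one possible realization.

```python
# One possible realization of SIFT-based pixel-point matching between the
# first and second images, assuming opencv-python >= 4.4.
import cv2

def match_pixel_points(first_image, second_image, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, desc1 = sift.detectAndCompute(first_image, None)
    kp2, desc2 = sift.detectAndCompute(second_image, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for pair in matcher.knnMatch(desc1, desc2, k=2):
        # Lowe's ratio test filters ambiguous matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matches.append(pair[0])
    points1 = [kp1[m.queryIdx].pt for m in matches]
    points2 = [kp2[m.trainIdx].pt for m in matches]
    return points1, points2
```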
The stereo matching algorithm is one of the most active research topics in computer vision. The process is as follows. First, matching cost computation: for each pixel p on the reference image I_R, the cost of matching the corresponding point I_T(p, d) on the target image is computed over all disparity candidates d, and the cost values are stored in a three-dimensional array commonly called a Disparity Space Image (DSI). Next, cost aggregation: the matching costs within a support window are aggregated by summing, averaging or other methods to obtain the accumulated cost CA(p, d) of point p at disparity d; cost aggregation reduces the influence of outliers and improves the signal-to-noise ratio (SNR), thereby improving matching precision. Then, disparity computation: a Winner-Take-All (WTA) strategy is adopted, that is, the point with the optimal accumulated cost within the disparity search range is selected as the matching point, and the corresponding disparity is the required disparity. Finally, with the left and right images each taken in turn as the reference image, left and right disparity maps are obtained after the above three steps; the disparity maps are then optimized and corrected by further post-processing. Commonly used methods include interpolation, sub-pixel enhancement, refinement and image filtering, whose specific steps are not repeated here.
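A toy sketch of that pipeline is given below: an absolute-difference matching cost, box-filter cost aggregation over a support window, and Winner-Take-All disparity selection. Post-processing (interpolation, sub-pixel enhancement, filtering) is omitted, and the array and function names are assumptions.

```python
# Toy cost-volume stereo matcher: SAD cost, window aggregation, WTA selection.
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(ref, target, max_disparity, window=5):
    ref = ref.astype(np.float32)
    target = target.astype(np.float32)
    h, w = ref.shape
    # The 3D cost array below is the Disparity Space Image (DSI).
    cost_volume = np.empty((max_disparity, h, w), dtype=np.float32)
    for d in range(max_disparity):
        shifted = np.roll(target, d, axis=1)            # I_T(x - d, y)
        cost = np.abs(ref - shifted)                    # matching cost
        cost[:, :d] = 1e9                               # invalid left border
        cost_volume[d] = uniform_filter(cost, window)   # cost aggregation
    return np.argmin(cost_volume, axis=0)               # Winner-Take-All
```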
The triangulation ranging principle is the most common optical three-dimensional measurement technique. Based on traditional triangulation, the depth of a point to be measured is calculated from the angular change produced by the deviation of that point relative to an optical reference line.
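For a rectified stereo pair this reduces to the standard relation Z = f * B / d, where f is the focal length in pixels and B the baseline between the two camera modules; both come from the calibration parameters under test. A minimal sketch:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    disparity_px = disparity_px.astype(np.float32)
    with np.errstate(divide="ignore"):
        depth = focal_length_px * baseline_m / disparity_px
    depth[~np.isfinite(depth)] = 0.0  # zero disparity means unknown depth
    return depth
```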
Step 203, determining a second depth according to the depth image. The depth image is used for describing depth information of the scene; the outline of the scene is represented in different colors for different distances, i.e. the second depth is described by the color of the depth map. The depth camera module in step 203 may be a TOF camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light plane structures onto the surface of the measured object, receives the reflected light of the controllable light spots, light bars or light planes, and obtains a depth image according to the deformation of the projected light. The TOF camera transmits near-infrared light to the scene, receives the reflected near-infrared light, acquires depth information of the scene by calculating the time difference or phase difference of the reflected light, and represents the outline of the scene in different colors for different distances to obtain a depth image.
And 204, comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
A first depth is calculated according to the parallax information determined from the first image and the second image, a second depth is obtained from the depth image, the absolute value of the difference between the first depth and the second depth is then obtained, and this absolute value is compared with a preset threshold. The preset threshold is set by an engineer during the calibration process of the camera module according to specific conditions, and is not limited herein. If the absolute value of the difference is smaller than the preset threshold, the actual error of the calibration result of the camera module is within the allowable error range, and a calibration test pass prompt signal is generated; the prompt signal notifies the calibration test processing unit of the electronic device that the calibration result of the camera module has passed the calibration test.
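The pass/fail decision itself is a one-line comparison. The sketch below mirrors it; the 0.05 m threshold is only a placeholder, since the patent leaves the preset threshold to the engineer.

```python
# Minimal sketch of the calibration check; the threshold value is a placeholder.
def check_calibration(first_depth_m, second_depth_m, preset_threshold_m=0.05):
    if abs(first_depth_m - second_depth_m) < preset_threshold_m:
        return "CALIBRATION_TEST_PASSED"   # prompt signal to the test unit
    return "CALIBRATION_TEST_FAILED"       # triggers a second calibration
```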
In one embodiment, as shown in fig. 3, the method for calibrating a camera module further includes:
step 305, if the difference value between the first depth and the second depth is greater than or equal to a preset threshold value, generating a calibration test failure prompt signal; and calibrating the camera module for the second time.
A first depth is calculated according to the parallax information determined from the first image and the second image, a second depth is obtained from the depth image, and the absolute value of the difference between the first depth and the second depth is compared with the preset threshold; the preset threshold is set by an engineer during the calibration process of the camera module according to specific conditions, and is not limited herein. If the absolute value of the difference is greater than or equal to the preset threshold, the actual error of the calibration result exceeds the allowable error range, and a calibration test failure prompt signal is generated; the prompt signal notifies the calibration test processing unit of the electronic device that the calibration result did not pass the test, so the camera module needs to be calibrated a second time. Camera module calibration aims to restore objects in space from images shot by the camera module: a linear relation exists between an image point and the corresponding object point in three-dimensional space, expressed by a projection matrix that can be regarded as the geometric model of camera imaging. The parameters of this matrix are the camera parameters, and the process of solving them is called camera calibration. The camera module calibration algorithm can be briefly described as follows: print a template and attach it to a plane; shoot several template images from different angles; detect the feature points in the images; solve the intrinsic and extrinsic parameters of the camera; solve the distortion coefficients; and optimize and refine the distortion model.
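The template-based procedure listed above matches the classic planar-target calibration flow; a sketch using OpenCV's chessboard routines follows. The OpenCV calls are real, but the board size and square size are assumptions, and the patent does not mandate this library.

```python
# Sketch of planar-template calibration with OpenCV; board geometry is assumed.
import cv2
import numpy as np

def calibrate_from_images(images, board_size=(9, 6), square_size_m=0.025):
    # 3D corner coordinates of the printed template, lying in its own plane.
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
    objp *= square_size_m

    object_points, image_points = [], []
    for img in images:  # template shot from several different angles
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:  # feature points detected in this view
            object_points.append(objp)
            image_points.append(corners)

    # Solves intrinsics, extrinsics and distortion coefficients, with
    # built-in nonlinear refinement of the distortion model.
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, gray.shape[::-1], None, None)
    return rms, camera_matrix, dist_coeffs
```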
In one embodiment, as shown in fig. 4, acquiring the depth image of the scene through the depth camera module includes: the depth camera module is an anti-shake camera module having an optical image stabilization (OIS) device, and the anti-shake camera module enables the optical anti-shake function and acquires the depth image. Optical anti-shake avoids or reduces shake of the camera or similar imaging instruments during the capture of optical signals by moving optical components, such as the lens, thereby improving imaging quality. Optical anti-shake is the anti-shake technology most accepted by the public; it compensates the light path for hand shake through a movable component, thereby reducing photo blur.
In one embodiment, the anti-shake camera module enabling the optical anti-shake function and acquiring the depth image includes: step 401, acquiring current shake data of the depth camera module, wherein the shake data includes position change data; step 402, determining offset data of the anti-shake lens of the anti-shake camera module according to a preset relation between shake data and position change of the anti-shake lens; and step 403, adjusting the position of the anti-shake lens according to the anti-shake lens offset data, and acquiring the depth image a second time.
In one embodiment, the anti-shake camera module enabling the optical anti-shake function and acquiring the depth image includes the following. Step 401, acquiring current shake data of the depth camera module, wherein the shake data includes position change data. The camera module changes position when it shakes, and the degree of shake is expressed as quantized shake data; the shake data includes the position change data and the angle change when the camera module shakes, and may be detected by a shake detection device. Step 402, determining offset data of the anti-shake lens of the anti-shake camera module according to a preset relation between shake data and position change of the anti-shake lens. The preset relation is determined in advance from the positional relationship between the camera shake detection device and the anti-shake lens, the relationship between the shake direction and the anti-shake lens moving direction, and the relationship among the focal length, shake distance, shake angle and anti-shake lens moving distance. Through this relation, the anti-shake movement data of the anti-shake lens corresponding to any shake data can be obtained. Step 403, adjusting the position of the anti-shake lens according to the anti-shake lens offset data, and acquiring the depth image a second time. The anti-shake lens movement data includes a moving direction and a moving distance; the position of the anti-shake lens is determined according to the movement data, and the anti-shake lens is adjusted to complete the anti-shake compensation movement so that the image sensor can capture an image after anti-shake. Thereby, an anti-shake depth image can be obtained.
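Steps 401 to 403 can be summarized in code. The linear gain below stands in for the preset relation between shake data and lens offset, which in a real module depends on the OIS geometry and focal length; the sensor and actuator interfaces are assumptions, not patent APIs.

```python
# Hedged sketch of steps 401-403; the mapping and interfaces are assumptions.
from dataclasses import dataclass

@dataclass
class ShakeData:
    dx_um: float      # position change along x, in micrometers (step 401)
    dy_um: float      # position change along y, in micrometers
    angle_deg: float  # shake angle reported by the shake detection device

def lens_offset(shake: ShakeData, gain: float = -1.0):
    # Step 402: the compensating move opposes the detected shake so the image
    # stays fixed on the sensor; `gain` encodes the preset relation.
    return gain * shake.dx_um, gain * shake.dy_um

def stabilize_and_capture(shake_sensor, lens_actuator, depth_camera):
    shake = shake_sensor.read()                # step 401 (assumed interface)
    lens_actuator.move(*lens_offset(shake))    # step 403: adjust lens position
    return depth_camera.capture_depth_image()  # acquire the image a second time
```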
In one embodiment, as shown in fig. 5, acquiring a depth image by a depth camera module includes:
step 501, identifying an object on the depth image, and acquiring an identification confidence of the object;
step 502, comparing the recognition confidence coefficient with a set threshold value, and acquiring a difference value;
Step 503, if the difference between the recognition confidence and the set threshold meets a preset condition, performing optical zoom and/or digital zoom on the object to acquire the depth image a second time.
In this embodiment, because the depth camera module determines the distance between each object in the image and the camera module by shooting, it is particularly important that objects in the image be accurately identified. When recognizing an object in an image, if the object is severely distorted or occupies too small a portion of the frame, it is often difficult to recognize it accurately. A depth image is acquired through the depth camera module, the similarity between the object picture in the depth image and the actual object picture is calculated, and the maximum similarity is determined; the recognition confidence of the object to be detected is calculated from the maximum similarity. It is then judged whether the recognition confidence of the object to be detected is smaller than a set threshold. If the object is recognized but the recognition confidence is smaller than the set threshold, the optical focal length of the depth camera module is adjusted to shoot the color image again, or digital zoom is performed on the existing color image, and object recognition is performed again on the color image obtained the second time, until the recognition confidence is greater than the set threshold and the depth image of the object is output. If the object recognition confidence is greater than or equal to the set threshold, the depth image of the object is output directly. It should be noted that if the object is not recognized at all, the optical focal length of the camera module is adjusted to shoot the color image again. The set threshold of the recognition confidence is determined by an engineer during software design according to hardware conditions and the specific conditions of the software design, and is not limited herein.
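A compact sketch of the recognize-compare-zoom loop in steps 501 to 503 follows; `capture`, `recognize` and the zoom control are assumed interfaces, and the 0.8 threshold is a placeholder for the engineer-chosen set threshold.

```python
# Hedged sketch of steps 501-503; interfaces and threshold are assumptions.
def acquire_recognizable_image(camera, recognize, set_threshold=0.8,
                               max_attempts=5):
    image = camera.capture()
    for _ in range(max_attempts):
        confidence = recognize(image)       # step 501: recognition confidence
        if confidence >= set_threshold:     # step 502: compare with threshold
            return image                    # output directly
        camera.optical_zoom_in()            # step 503: zoom on the object and
        image = camera.capture()            # acquire the image a second time
    return image
```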
In one embodiment, the depth camera module optically and/or digitally zooms an object, comprising: the optical zoom and/or the digital zoom is performed at a preset brightness.
In one embodiment, performing the optical zoom and/or digital zoom at the preset brightness includes: at a first brightness, performing digital zoom; at a second brightness, performing optical zoom until a predetermined proportion of the maximum optical zoom is reached, and then performing digital zoom; and at a third brightness, performing optical zoom until the maximum optical zoom is reached, and then performing digital zoom, wherein the first brightness, the second brightness and the third brightness are defined by the preset brightness.
It is noted that the preset brightness is defined by illumination levels of the shooting area: an illumination of 1-100 lux is defined as the first brightness; an illumination of 100-1000 lux is defined as the second brightness; and an illumination greater than 1000 lux is defined as the third brightness. The skilled person will understand that the selected illumination levels are used for exemplary purposes.
At the first brightness, digital zoom or optical zoom is performed directly. Digital zoom preserves more light in the final depth image than optical zoom, so only digital zoom is used up to a reasonable zoom level; if zooming beyond that level is required, optical zoom is used in addition. In other words, digital zoom has a higher priority than optical zoom at the first brightness, because the light-level reduction caused by optical magnification is more noticeable than the reduction caused by digital zoom.
At the second brightness, optical zoom can be used first, after which digital zoom is performed if further zooming is required. The amount of optical zoom available in the camera module may vary, but as an example, optical zoom may be used until it reaches approximately half its maximum, or some other predetermined proportion. As mentioned, the proportion is a predetermined amount, which may be 40%, 50%, 60%, 70%, 80% or 90%, or any value in the range 30%-100%, depending on the function of the imaging apparatus, and is not limited herein. In other words, under the second brightness condition, some amount of optical zoom may be used, but to avoid losing too much light, part of the zoom is done digitally.
At the third brightness, optical zoom may be used, for example, up to its maximum. After the maximum optical zoom is reached, and depending on whether further zooming is required, digital zoom may be performed. In other words, in bright light conditions the light-level reduction at the optical image sensor caused by optical zoom can be tolerated, overcoming the disadvantage of digital zoom, namely that not all pixels contribute to the final image quality.
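The three brightness bands can be expressed as a simple zoom planner. The sketch below follows the text's lux boundaries (1-100, 100-1000, above 1000 lux) and uses the 50% example proportion; it simplifies the first band to digital zoom only.

```python
# Hedged sketch of the brightness-dependent zoom policy described above.
def plan_zoom(requested, lux, max_optical, ratio=0.5):
    """Split a requested zoom factor into (optical, digital) components."""
    if lux <= 100:                  # first brightness: prefer digital zoom
        optical = 1.0
    elif lux <= 1000:               # second: optical up to a preset proportion
        optical = min(requested, ratio * max_optical)
    else:                           # third: optical up to its maximum
        optical = min(requested, max_optical)
    digital = max(requested / optical, 1.0)  # remainder handled digitally
    return optical, digital
```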
Fig. 6 is a schematic structural diagram of a camera module calibration apparatus provided in an embodiment. An embodiment of the present application further provides a camera module calibration apparatus, applied to an electronic device having a first camera module, a second camera module and a depth camera module, including:
the image obtaining module 601 is used for obtaining a first image through a first camera module, obtaining a second image of a scene through a second camera module, and obtaining a depth image through a depth camera module in the same scene; the user selects a scene chart1, the electronic device utilizes the first camera module, the second camera module and the depth camera module to shoot a chart1 at the same angle, the first camera module shoots a chart1 to obtain a first image, the second camera module shoots a chart1 to obtain a second image, and the depth camera module shoots a chart1 to obtain a depth image. The first camera module 111 and the second camera module 112 acquire RGB images using RGB modules. The depth camera module 113 is a Time of flight (TOF) camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light surface structures to the surface of the measured object; and receives reflected light of a controllable light spot, light bar or smooth structure, and obtains a depth image according to the deformation amount of the emitted light. The TOF camera transmits near infrared light to a scene; receiving the reflected near infrared rays, and acquiring depth information of a scene by calculating the time difference or phase difference of the reflected near infrared rays; and representing the outline of the scene with different colors for different distances to acquire a depth image.
The first obtaining module 602 is configured to extract the same pixel points of the first image and the second image, determine parallax information, and calculate a first depth according to the parallax information. Image recognition is a classification process that distinguishes an image from images of other classes. Pixel points of the first image and the second image are extracted using the Scale-Invariant Feature Transform (SIFT) method or the Speeded Up Robust Features (SURF) method; the pixel points extracted from the first image are matched with those extracted from the second image using a stereo matching algorithm to obtain matched pixel point pairs; parallax information of the scene chart1 is acquired; and the parallax information is converted into depth information by calculation according to the triangulation ranging principle.
SIFT is a machine vision algorithm used to detect and describe local features in an image. It searches for extreme points across spatial scales and extracts descriptors of those points that are invariant to position, scale and rotation. Its applications include object recognition, robot map perception and navigation, image stitching, 3D model building, gesture recognition, image tracking and motion comparison.
SURF is a feature point extraction algorithm proposed by H. Bay after the SIFT algorithm. The method uses integral image techniques to perform block-wise feature computation on the basis of the SIFT idea, which further accelerates the calculation; meanwhile, it uses a feature descriptor generated from second-order multi-scale templates, which improves the robustness of feature point matching.
The stereo matching algorithm is one of the most active research topics in computer vision. The process is as follows. First, matching cost computation: for each pixel p on the reference image I_R, the cost of matching the corresponding point I_T(p, d) on the target image is computed over all disparity candidates d, and the cost values are stored in a three-dimensional array commonly called a Disparity Space Image (DSI). Next, cost aggregation: the matching costs within a support window are aggregated by summing, averaging or other methods to obtain the accumulated cost CA(p, d) of point p at disparity d; cost aggregation reduces the influence of outliers and improves the signal-to-noise ratio (SNR), thereby improving matching precision. Then, disparity computation: a Winner-Take-All (WTA) strategy is adopted, that is, the point with the optimal accumulated cost within the disparity search range is selected as the matching point, and the corresponding disparity is the required disparity. Finally, with the left and right images each taken in turn as the reference image, left and right disparity maps are obtained after the above three steps; the disparity maps are then optimized and corrected by further post-processing. Commonly used methods include interpolation, sub-pixel enhancement, refinement and image filtering, whose specific steps are not repeated here.
The triangulation ranging principle is the most common optical three-dimensional measurement technique. Based on traditional triangulation, the depth of a point to be measured is calculated from the angular change produced by the deviation of that point relative to an optical reference line.
The second obtaining module 603 is configured to determine a second depth according to the depth image. The depth image is used for describing depth information of the scene; the outline of the scene is represented in different colors for different distances, i.e. the second depth is described by the color of the depth map. The depth camera module can be a TOF camera or a structured light camera. The structured light camera projects controllable light spots, light bars or light plane structures onto the surface of the measured object, receives the reflected light of the controllable light spots, light bars or light planes, and obtains a depth image according to the deformation of the projected light. The TOF camera transmits near-infrared light to the scene, receives the reflected near-infrared light, acquires depth information of the scene by calculating the time difference or phase difference of the reflected light, and represents the outline of the scene in different colors for different distances to obtain a depth image.
The calibration test module 604 is configured to compare the first depth with the second depth, and if the difference between the first depth and the second depth is smaller than a preset threshold, generate a calibration test pass prompt signal.
A first depth is calculated according to the parallax information determined from the first image and the second image, a second depth is obtained from the depth image, the absolute value of the difference between the first depth and the second depth is then obtained, and this absolute value is compared with a preset threshold. The preset threshold is set by an engineer during the calibration process of the camera module according to specific conditions, and is not limited herein. If the absolute value of the difference is smaller than the preset threshold, the actual error of the calibration result of the camera module is within the allowable error range, and a calibration test pass prompt signal is generated; the prompt signal notifies the calibration test processing unit of the electronic device that the calibration result of the camera module has passed the calibration test.
It should be understood that although the various steps in the flowcharts of figs. 2-5 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution is not strictly limited and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2-5 may include multiple sub-steps or multiple stages that are not necessarily performed at the same moment but may be performed at different moments, and the order of execution of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 7 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 7, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the camera module calibration method provided in the foregoing embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The modules in the camera module calibration device provided in the embodiment of the present application may be implemented in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides an electronic device. The electronic device includes a first camera module, a second camera module, a depth camera module, a memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the camera module calibration method in any of the embodiments above. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units that define an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 8, for convenience of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in fig. 8, the image processing circuit includes a first ISP processor 830, a second ISP processor 840 and a control logic 850. The first camera module 810 includes one or more first lenses 812 and a first image sensor 814. The first image sensor 814 may include a color filter array (e.g., a Bayer filter), and the first image sensor 814 may acquire light intensity and wavelength information captured with each imaging pixel of the first image sensor 814 and provide a set of image data that may be processed by the first ISP processor 830. The second camera module 820 includes one or more second lenses 822 and a second image sensor 824. The second image sensor 824 may include a color filter array (e.g., a Bayer filter), and the second image sensor 824 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 824 and provide a set of image data that may be processed by the second ISP processor 840.
The first image acquired by the first camera module 810 is transmitted to the first ISP processor 830 for processing, after the first ISP processor 830 processes the first image, the statistical data (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) of the first image can be sent to the control logic 850, and the control logic 850 can determine the control parameters of the first camera module 810 according to the statistical data, so that the first camera module 810 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in the image memory 860 after being processed by the first ISP processor 830, and the first ISP processor 830 may also read the image stored in the image memory 860 to process the image. In addition, the first image may be directly transmitted to the display 880 for display after being processed by the ISP processor 830, and the display 880 may also read the image in the image memory 860 for display.
Wherein the first ISP processor 830 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 830 may perform one or more image processing operations on the image data, collecting statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image memory 860 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct memory access) feature.
Upon receiving image data from the first image sensor 814 interface, the first ISP processor 830 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 860 for additional processing before being displayed. The first ISP processor 830 receives the processed data from the image memory 860 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 830 may be output to the display 880 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 830 may also be sent to the image memory 860, and the display 880 may read image data from the image memory 860. In one embodiment, the image memory 860 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 830 may be sent to the control logic 850. For example, the statistical data may include first image sensor 814 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, shading correction for first lens 812, and the like. The control logic 850 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the first camera module 810 and control parameters of the first ISP processor 830 based on the received statistical data. For example, the control parameters of the first camera module 810 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 812 shading correction parameters.
Similarly, the second image captured by the second camera module 820 is transmitted to the second ISP processor 840 for processing. After the second ISP processor 840 processes the second image, the statistical data of the second image (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) may be sent to the control logic 850, and the control logic 850 may determine the control parameters of the second camera module 820 according to the statistical data, so that the second camera module 820 may perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 860 after being processed by the second ISP processor 840, and the second ISP processor 840 may also read the image stored in the image memory 860 for processing. In addition, the second image may be directly transmitted to the display 880 for display after being processed by the second ISP processor 840, and the display 880 may also read the image in the image memory 860 for display. The second camera module 820 and the second ISP processor 840 may also implement the processes described for the first camera module 810 and the first ISP processor 830.
In the embodiment of the present application, the image processing technology in fig. 8 is used to implement the steps of the camera module calibration method:
under the same scene, a first image of the scene is obtained through a first camera module, a second image of the scene is obtained through a second camera module, and a depth image of the scene is obtained through a depth camera module;
extracting the same pixel points of the first image and the second image, acquiring parallax information, and calculating a first depth through the parallax information;
determining a second depth from the depth image;
and comparing the first depth with the second depth, and if the difference value of the first depth and the second depth is smaller than a preset threshold value, generating a prompt signal that the calibration test is passed.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the camera module calibration method.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform a camera module calibration method.
Any reference to memory, storage, database, or other medium used by embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification. It should be noted that expressions such as "one embodiment" and "for example" in the present application are intended to illustrate the present application, not to limit it.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

Claims (10)

1. A camera module calibration method, applied to an electronic device having a first camera module, a second camera module and a depth camera module, comprising:
acquiring, in the same scene, a first image of the scene through the first camera module, a second image of the scene through the second camera module, and a depth image of the scene through the depth camera module;
extracting identical pixel points from the first image and the second image, obtaining disparity information, and calculating a first depth from the disparity information;
determining a second depth according to the depth image; and
comparing the first depth with the second depth, and generating a calibration-test-passed prompt signal if the absolute value of the difference between the first depth and the second depth is less than a preset threshold.

2. The camera module calibration method according to claim 1, further comprising: generating a calibration-test-failed prompt signal and calibrating the camera module a second time if the absolute value of the difference between the first depth and the second depth is greater than or equal to the preset threshold.

3. The camera module calibration method according to claim 1 or 2, wherein acquiring the depth image of the scene through the depth camera module comprises: the depth camera module being an anti-shake camera module provided with an optical image stabilization (OIS) device, the anti-shake camera module enabling its optical image stabilization function and acquiring the depth image.

4. The camera module calibration method according to claim 3, wherein the anti-shake camera module enabling the optical image stabilization function and acquiring the depth image comprises:
acquiring current shake data of the depth camera module, the shake data including position change data;
determining offset data for the anti-shake lens of the anti-shake camera module according to a preset relationship between shake data and anti-shake lens position change; and
adjusting the position of the anti-shake lens according to the offset data, and acquiring the depth image a second time.

5. The camera module calibration method according to claim 1 or 2, wherein acquiring the depth image through the depth camera module comprises:
recognizing an object in the depth image, and obtaining a recognition confidence for the object;
comparing the recognition confidence with a set threshold, and obtaining the difference between them; and
performing optical zoom and/or digital zoom on the object to acquire the depth image a second time if the difference between the recognition confidence and the set threshold satisfies a preset condition.

6. The camera module calibration method according to claim 5, wherein performing optical zoom and/or digital zoom on the object comprises: performing the optical zoom and/or the digital zoom on the object according to a preset brightness.

7. The camera module calibration method according to claim 6, wherein performing optical zoom and/or digital zoom according to the preset brightness comprises:
at a first brightness, performing the digital zoom or the optical zoom;
at a second brightness, performing the optical zoom until a predetermined proportion of the maximum optical zoom is reached, and then performing the digital zoom; and
at a third brightness, performing the optical zoom until the maximum optical zoom is reached, and then performing the digital zoom, wherein the first brightness, the second brightness and the third brightness are defined by the preset brightness.

8. A camera module calibration apparatus, comprising:
an image acquisition module, configured to acquire, in the same scene, a first image through a first camera module, a second image of the scene through a second camera module, and a depth image through a depth camera module;
a first acquisition module, configured to extract identical pixel points from the first image and the second image, determine disparity information, and calculate a first depth from the disparity information;
a second acquisition module, configured to determine a second depth according to the depth image; and
a calibration test module, configured to compare the first depth with the second depth, and to generate a calibration-test-passed prompt signal if the absolute value of the difference between the first depth and the second depth is less than a preset threshold.

9. An electronic device, comprising a camera module, a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the camera module calibration method according to any one of claims 1 to 7.

10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
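
Illustrative sketches

The following minimal Python sketches illustrate the mechanisms the claims describe; they are reading aids under stated assumptions, not the patented implementation. All function names, parameter values and thresholds below (focal_length_px, baseline_m, the 0.05 m pass threshold, the OIS gains, the brightness tiers and the 0.8 optical share) are hypothetical and introduced only for illustration.

A sketch of the depth-consistency test of claims 1 and 2, assuming a rectified stereo pair so that depth follows the standard triangulation relation Z = f·B/d, and assuming the per-point depth differences are aggregated with a median before the threshold comparison:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Rectified-stereo triangulation: Z = f * B / d (metres)."""
    # Zero disparity means a point at infinity; mask it out as NaN.
    d = np.where(disparity_px == 0, np.nan, disparity_px.astype(float))
    return focal_length_px * baseline_m / d

def calibration_test(pts_first, pts_second, depth_image,
                     focal_length_px, baseline_m, threshold_m=0.05):
    """Compare stereo-derived depth against the depth camera's map.

    pts_first, pts_second: (N, 2) arrays of matched (x, y) pixel
    coordinates of the same scene points in the first and second image.
    depth_image: depth map from the depth camera module, in metres.
    """
    # First depth: triangulated from the horizontal disparity of each match.
    disparity = pts_first[:, 0] - pts_second[:, 0]
    first_depth = depth_from_disparity(disparity, focal_length_px, baseline_m)

    # Second depth: the depth camera's reading at the same pixel locations.
    cols = pts_first[:, 0].astype(int)
    rows = pts_first[:, 1].astype(int)
    second_depth = depth_image[rows, cols]

    # Pass when the two depths agree to within the preset threshold.
    diff = np.abs(first_depth - second_depth)
    return bool(np.nanmedian(diff) < threshold_m)
```

A True result corresponds to the calibration-test-passed prompt signal of claim 1; a False result corresponds to the failure signal of claim 2, after which the camera module would be calibrated a second time.

A sketch of the shake-compensation step of claim 4; the preset relationship between shake data and lens position change is modelled here as a simple linear gain, whereas a real module would use a calibrated lookup table:

```python
def ois_lens_offset(shake_dx, shake_dy, gain_x=-0.9, gain_y=-0.9):
    """Map measured shake (position change data) to a compensating
    offset for the anti-shake lens, applied before the depth image
    is acquired a second time."""
    return gain_x * shake_dx, gain_y * shake_dy
```

A sketch of the brightness-tiered zoom strategy of claim 7; which brightness level selects which strategy, the lux boundaries, and the predetermined proportion of the maximum optical zoom are all assumptions:

```python
def plan_zoom(brightness_lux, requested_zoom, max_optical,
              optical_share=0.8, bright_lux=500.0, mid_lux=100.0):
    """Split a requested total zoom factor into (optical, digital)
    parts, choosing the split by scene brightness."""
    if brightness_lux >= bright_lux:
        # First brightness: digital or optical zoom alone is acceptable;
        # optical is preferred here for image quality.
        optical_limit = max_optical
    elif brightness_lux >= mid_lux:
        # Second brightness: optical zoom only up to a predetermined
        # proportion of its maximum, with the remainder done digitally.
        optical_limit = optical_share * max_optical
    else:
        # Third brightness: exhaust the full optical range first.
        optical_limit = max_optical
    optical = min(requested_zoom, optical_limit)
    digital = requested_zoom / optical  # total zoom = optical * digital
    return optical, digital
```

For example, plan_zoom(200.0, 8.0, 5.0) returns (4.0, 2.0): optical zoom up to 80% of the 5x maximum, with the remaining factor of 2 applied digitally.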
CN201811455865.1A 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium Expired - Fee Related CN109712192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811455865.1A CN109712192B (en) 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811455865.1A CN109712192B (en) 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109712192A (en) 2019-05-03
CN109712192B (en) 2021-03-23

Family

ID=66254456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811455865.1A Expired - Fee Related CN109712192B (en) 2018-11-30 2018-11-30 Camera module calibration method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109712192B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110278029B (en) * 2019-06-25 2020-12-22 Oppo广东移动通信有限公司 Data transmission control method and related product
CN110400273B (en) 2019-07-11 2022-03-22 Oppo广东移动通信有限公司 Depth data filtering method and device, electronic equipment and readable storage medium
CN112584129A (en) * 2019-09-30 2021-03-30 北京芯海视界三维科技有限公司 Method and device for realizing 3D shooting and display and 3D display terminal
CN112862880B (en) * 2019-11-12 2024-06-25 Oppo广东移动通信有限公司 Depth information acquisition method, device, electronic equipment and storage medium
CN114761825B (en) * 2019-12-16 2025-05-30 索尼半导体解决方案公司 Time-of-flight imaging circuit, time-of-flight imaging system, and time-of-flight imaging method
CN111458105A (en) * 2020-04-21 2020-07-28 欧菲微电子技术有限公司 Method, device and equipment for testing optical module
CN112188059B (en) * 2020-09-30 2022-07-15 深圳市商汤科技有限公司 Wearable device, intelligent guiding method and device and guiding system
CN112308929B (en) * 2020-10-28 2024-03-15 深圳市开成亿科技有限公司 Underwater camera shooting calibration method, underwater camera shooting calibration system and storage medium
CN114648440B (en) * 2020-12-18 2024-11-26 浙江舜宇智能光学技术有限公司 Processing method and FPGA chip for calibration information of camera module
CN114693794A (en) * 2020-12-25 2022-07-01 瑞芯微电子股份有限公司 Calibration method, depth imaging method, structured light module and complete machine
CN112911091B (en) * 2021-03-23 2023-02-24 维沃移动通信(杭州)有限公司 Parameter adjusting method and device of multipoint laser and electronic equipment
CN113473113B (en) * 2021-06-30 2023-07-28 展讯通信(天津)有限公司 Camera testing method, system and equipment
CN113838146B (en) * 2021-09-26 2024-10-18 昆山丘钛光电科技有限公司 Method and device for verifying calibration precision of camera module and testing camera module
CN113838151B (en) * 2021-10-15 2023-11-17 西安维沃软件技术有限公司 Camera calibration method, device, equipment and medium
CN114882117B (en) * 2022-04-28 2025-07-18 影石创新科技股份有限公司 Phase difference detection method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107967701A (en) * 2017-12-18 2018-04-27 信利光电股份有限公司 A kind of scaling method, device and the equipment of depth camera equipment
CN108010085A (en) * 2017-11-30 2018-05-08 西南科技大学 Target identification method based on binocular Visible Light Camera Yu thermal infrared camera
CN108734743A (en) * 2018-04-13 2018-11-02 深圳市商汤科技有限公司 Method, apparatus, medium and electronic equipment for demarcating photographic device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7417673B2 (en) * 2005-05-31 2008-08-26 Nokia Corporation Optical and digital zooming for an imaging device
US20120007954A1 (en) * 2010-07-08 2012-01-12 Texas Instruments Incorporated Method and apparatus for a disparity-based improvement of stereo camera calibration
CN102867304B (en) * 2012-09-04 2015-07-01 南京航空航天大学 Method for establishing relation between scene stereoscopic depth and vision difference in binocular stereoscopic vision system
US20170035268A1 (en) * 2015-08-07 2017-02-09 Ming Shi CO., LTD. Stereo display system and method for endoscope using shape-from-shading algorithm
CN105160663A (en) * 2015-08-24 2015-12-16 深圳奥比中光科技有限公司 Method and system for acquiring depth image

Also Published As

Publication number Publication date
CN109712192A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109712192B (en) Camera module calibration method and device, electronic equipment and computer readable storage medium
CN107948519B (en) Image processing method, device and equipment
CN109767467B (en) Image processing method, apparatus, electronic device, and computer-readable storage medium
CN109089047B (en) Method and device for controlling focus, storage medium, and electronic device
CN109559353B (en) Camera module calibration method, device, electronic device, and computer-readable storage medium
CN108055452B (en) Image processing method, device and equipment
CN110610465B (en) Image correction method and device, electronic equipment and computer readable storage medium
EP3480784B1 (en) Image processing method, and device
CN107945105B (en) Background blur processing method, device and equipment
JP6663040B2 (en) Depth information acquisition method and apparatus, and image acquisition device
CN109685853B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107948500A (en) Image processing method and device
CN109963080B (en) Image acquisition method, device, electronic device and computer storage medium
CN108053438B (en) Depth of field acquisition method, device and device
CN109584312B (en) Camera calibration method, apparatus, electronic device and computer-readable storage medium
CN112004029B (en) Exposure processing method, exposure processing device, electronic apparatus, and computer-readable storage medium
CN111246100B (en) Anti-shake parameter calibration method and device and electronic equipment
US12141947B2 (en) Image processing method, electronic device, and computer-readable storage medium
CN107948617B (en) Image processing method, apparatus, computer-readable storage medium, and computer device
CN110248101A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109559352B (en) Camera calibration method, apparatus, electronic device and computer-readable storage medium
US11218650B2 (en) Image processing method, electronic device, and computer-readable storage medium
CN112866553A (en) Focusing method and device, electronic equipment and computer readable storage medium
CN109584311B (en) Camera calibration method, apparatus, electronic device and computer-readable storage medium
CN109697737B (en) Camera calibration method, apparatus, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210323