CN107025666A - Single-camera-based depth detection method, device, and electronic device
- Publication number: CN107025666A
- Application number: CN201710138684.5A
- Authority: CN (China)
- Prior art keywords: camera, image, current scene, depth detection, electronic device
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Description
Technical Field

The present invention relates to imaging technology, and in particular to a single-camera-based depth detection method and device, and an electronic device.
Background

In depth detection methods based on binocular stereo vision, the two cameras must have matching specifications so that a well-matched stereo image pair can be obtained, and the relative position between the two cameras must remain fixed; only then can the accuracy of the depth data be ensured. However, current camera manufacturing can hardly guarantee that two cameras have exactly the same specifications. In addition, the relative position of the two cameras may change due to causes such as the device being dropped, so that accurate depth data cannot be obtained.
Summary of the Invention

Embodiments of the present invention provide a single-camera-based depth detection method, a single-camera-based depth detection device, and an electronic device.

A depth detection method for a single camera according to an embodiment of the present invention includes the following steps:

controlling the camera to capture the current scene at a first position to obtain a first image;

controlling the camera to move from the first position to a second position in a direction perpendicular to the axial direction of the camera, and then controlling the camera to capture a second image of the current scene; and

processing the first image and the second image to obtain depth information of the current scene.

A depth detection device for a single camera according to an embodiment of the present invention is used in an electronic device. The depth detection device includes a first control module, a second control module, and a processing module. The first control module is configured to control the camera to capture the current scene at a first position to obtain a first image. The second control module is configured to control the camera to capture a second image of the current scene after the camera has moved from the first position to a second position in a direction perpendicular to the axial direction of the camera. The processing module is configured to process the first image and the second image to obtain depth information of the current scene.

An electronic device according to an embodiment of the present invention includes a camera, a motion sensor, and the above depth detection device. The depth detection device is electrically connected to both the camera and the motion sensor.

The single-camera-based depth detection method, single-camera-based depth detection device, and electronic device of the embodiments of the present invention capture images with a single movable camera. On the one hand, a well-matched stereo image pair can be obtained; on the other hand, the relative positions of the camera when capturing the stereo image pair are far less constrained and the camera can move freely, which avoids the inaccurate depth detection caused by changes in the relative position of two fixed cameras.

Additional aspects and advantages of the invention will be set forth in part in the following description, will become apparent in part from the following description, or may be learned through practice of the invention.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

Fig. 1 is a schematic flowchart of a single-camera-based depth detection method according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of the functional modules of an electronic device according to an embodiment of the present invention;

Fig. 3 is a schematic state diagram of a depth detection method according to some embodiments of the present invention;

Fig. 4 is a schematic state diagram of a depth detection method according to some embodiments of the present invention;

Fig. 5 is a schematic state diagram of a depth detection method according to some embodiments of the present invention;

Fig. 6 is a schematic flowchart of a depth detection method according to some embodiments of the present invention;

Fig. 7 is a schematic diagram of the functional modules of a second control module according to some embodiments of the present invention;

Fig. 8 is a schematic flowchart of a depth detection method according to some embodiments of the present invention;

Fig. 9 is a schematic diagram of the functional modules of a second control module according to some embodiments of the present invention;

Fig. 10 is a schematic state diagram of a depth detection method according to some embodiments of the present invention;

Fig. 11 is a schematic flowchart of a depth detection method according to some embodiments of the present invention;

Fig. 12 is a schematic diagram of the functional modules of a processing module according to some embodiments of the present invention; and

Fig. 13 is a schematic state diagram of a depth detection method according to some embodiments of the present invention.
Detailed Description

Embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and should not be construed as limiting the present invention.
Referring to FIGS. 1 and 2, the single-camera-based depth detection method according to an embodiment of the present invention includes the following steps:

S11: controlling the camera 20 to capture the current scene at a first position to obtain a first image;

S12: controlling the camera 20 to move from the first position to a second position in a direction perpendicular to the axial direction of the camera 20, and then controlling the camera 20 to capture a second image of the current scene; and

S13: processing the first image and the second image to obtain depth information of the current scene.
The single-camera-based depth detection method of the embodiment of the present invention is used in a depth detection device 10 for a single camera. The single-camera-based depth detection device 10 includes a first control module 11, a second control module 12, and a processing module 13. Step S11 may be implemented by the first control module 11, step S12 may be implemented by the second control module 12, and step S13 may be implemented by the processing module 13.

That is to say, the first control module 11 is configured to control the camera 20 to capture the current scene at the first position to obtain the first image; the second control module 12 is configured to control the camera 20 to capture the second image of the current scene after the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20; and the processing module 13 is configured to process the first image and the second image to obtain depth information of the current scene.
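Read together, the three modules amount to a capture-move-capture-process loop. The following Python sketch is only a schematic of that flow under assumed interfaces: `capture_image`, `read_displacement`, `compute_depth_map`, and `axis_check` are hypothetical stand-ins for the camera driver, the motion-sensor integration, the stereo processing of module 13, and the perpendicularity test, none of which are defined by the patent at this level of detail.

```python
import numpy as np
from typing import Callable

def single_camera_depth(capture_image: Callable[[], np.ndarray],
                        read_displacement: Callable[[], np.ndarray],
                        compute_depth_map: Callable[[np.ndarray, np.ndarray, float], np.ndarray],
                        axis_check: Callable[[np.ndarray], bool]) -> np.ndarray:
    """Schematic of modules 11-13: capture at the first position, wait for a lateral
    move to the second position, capture again, then process the stereo pair."""
    first_image = capture_image()                      # module 11 (step S11)
    displacement = read_displacement()                 # integrated motion-sensor data
    if not axis_check(displacement):                   # step S123 branch
        raise RuntimeError("prompt the user: move perpendicular to the camera axis")
    second_image = capture_image()                     # module 12 (step S12)
    baseline = float(np.linalg.norm(displacement))     # linear distance S
    return compute_depth_map(first_image, second_image, baseline)   # module 13 (step S13)

# Toy usage with stand-in callables (real ones would talk to camera 20 and motion sensor 30):
depth = single_camera_depth(
    capture_image=lambda: np.zeros((480, 640), dtype=np.uint8),
    read_displacement=lambda: np.array([0.05, 0.0, 0.0]),
    compute_depth_map=lambda a, b, s: np.full(a.shape, 1.0),
    axis_check=lambda d: abs(d[1]) < 0.01 * np.linalg.norm(d) + 1e-9,
)
```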
The single-camera-based depth detection device 10 of the embodiment of the present invention is applied to the electronic device 100 of the embodiment of the present invention. That is to say, the electronic device 100 of the embodiment of the present invention includes the single-camera-based depth detection device 10 of the embodiment of the present invention. Of course, the electronic device 100 of the embodiment of the present invention also includes a camera 20 and a motion sensor 30, and the depth detection device 10 is electrically connected to both the camera 20 and the motion sensor 30.

In some embodiments, the electronic device 100 includes a mobile phone, a tablet computer, a laptop computer, a smart watch, a smart band, smart glasses, and the like, without any limitation here. In a specific embodiment of the present invention, the electronic device 100 is a mobile phone.
Specifically, referring to FIGS. 3 to 5, Xl is the shortest distance from the midpoint of the bottom of the cup in the image to the leftmost edge of the image, Xr is the shortest distance from the midpoint of the bottom of the cup in the image to the rightmost edge of the image, and C is the width of the image. A first image is first captured at the first position Ol, and the camera 20 is then moved to the second position Or to capture a second image, where the direction of the line connecting the first position Ol and the second position Or is perpendicular to the axial direction of the camera 20. In this way, a stereo image pair consisting of the first image and the second image is obtained, and the depth information of the current scene can be obtained by processing this stereo image pair.

It should be noted that the axial direction of the camera 20 refers to the direction parallel to the optical axis of the camera 20 when it captures the current scene.

It can be understood that depth detection methods based on binocular stereo vision can hardly guarantee that the specifications of the two cameras are completely identical, and the relative position between the two cameras may change due to external factors such as the electronic device 100 being dropped, which affects the accuracy of depth detection. The single-camera depth detection method of the embodiment of the present invention captures images with a single movable camera: on the one hand, a well-matched stereo image pair can be obtained; on the other hand, the relative positions of the camera when capturing the stereo image pair are far less constrained and the camera can move freely, which avoids inaccurate depth detection caused by changes in relative position.
Referring to FIG. 6, in some embodiments, step S12 of controlling the camera 20 to move from the first position to the second position in a direction perpendicular to the axial direction of the camera 20 and then controlling the camera 20 to capture the second image of the current scene includes the following steps:

S121: determining, according to detection data of the motion sensor 30, whether the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20;

S122: controlling the camera 20 to capture the second image when the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20; and

S123: controlling the electronic device 100 to issue a prompt when the camera 20 has not moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20.
Referring to FIG. 7, in some embodiments, the second control module 12 includes a judgment unit 121, a control unit 122, and a prompt unit 123. Step S121 may be implemented by the judgment unit 121, step S122 may be implemented by the control unit 122, and step S123 may be implemented by the prompt unit 123.

That is to say, the judgment unit 121 is configured to determine, according to the detection data of the motion sensor 30, whether the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20; the control unit 122 is configured to control the camera 20 to capture the second image when the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20; and the prompt unit 123 is configured to control the electronic device 100 to issue a prompt when the camera 20 has not moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20.
Referring to FIG. 8, specifically, a spatial rectangular coordinate system X-Y-Z is established in the space in which the electronic device 100 is located. In the specific embodiment of the present invention, the axial direction of the camera 20 can be regarded as the Y-axis direction of this coordinate system. The X-axis direction is a first moving direction perpendicular to the axial direction of the camera 20, the Y-axis is a second moving direction parallel to the axial direction of the camera 20, and the Z-axis is a third moving direction perpendicular to the axial direction of the camera 20. To keep the camera 20 moving from the first position Ol in a direction perpendicular to its axial direction, the camera 20 must move, in the spatial rectangular coordinate system, along the X-axis direction (the first moving direction) or along the Z-axis direction (the third moving direction). In this way, two frames captured at different positions are obtained as a stereo image pair, and the depth information can be obtained by processing this stereo image pair in the subsequent steps. If the moving direction of the camera deviates from the X-axis direction and the Z-axis direction, the electronic device 100 issues a prompt to remind the user to find the correct capture position, so as to ensure that a usable stereo image pair is obtained.
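As an illustration only (not text of the patent), the check in step S121 can be thought of as testing whether the displacement vector reported by the motion sensor is perpendicular to the camera's axial direction within some tolerance. The Python sketch below makes that idea concrete under assumed inputs: a displacement vector integrated from the motion-sensor data, a unit vector for the camera axis (the Y axis of Fig. 8), and arbitrarily chosen tolerance values.

```python
import numpy as np

def is_lateral_move(displacement: np.ndarray,
                    camera_axis: np.ndarray,
                    angle_tolerance_deg: float = 5.0,
                    min_distance_m: float = 0.02) -> bool:
    """Return True if the camera moved far enough and (nearly) perpendicular
    to its axial direction, i.e. along the X or Z axis of Fig. 8."""
    distance = np.linalg.norm(displacement)
    if distance < min_distance_m:          # barely moved: not at the second position yet
        return False
    axis = camera_axis / np.linalg.norm(camera_axis)
    # Angle between displacement and axis; 90 degrees means a purely lateral move.
    cos_to_axis = abs(float(displacement @ axis)) / distance
    deviation_deg = 90.0 - np.degrees(np.arccos(np.clip(cos_to_axis, 0.0, 1.0)))
    return deviation_deg <= angle_tolerance_deg

# A 5 cm move along X while the camera looks along Y -> lateral, OK to capture (True).
print(is_lateral_move(np.array([0.05, 0.001, 0.0]), camera_axis=np.array([0.0, 1.0, 0.0])))
# A move mostly along the optical axis -> not lateral, the device should prompt the user (False).
print(is_lateral_move(np.array([0.005, 0.05, 0.0]), camera_axis=np.array([0.0, 1.0, 0.0])))
```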
In some embodiments, the motion sensor 30 includes a gyroscope or an acceleration sensor. In a specific embodiment of the present invention, the motion sensor 30 is a gyroscope.

It can be understood that the gyroscope can detect the deflection state of the electronic device 100. The detection data of the gyroscope assists the user in correcting the moving direction of the camera 20 and in determining the second position Or, so that a second image usable for depth information acquisition in the subsequent steps can be obtained.
Referring to FIG. 9, in some embodiments, step S122 of controlling the camera 20 to capture the second image when the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20 includes the following steps:

S1221: detecting whether the current scene is in a moving state when the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20; and

S1222: controlling the camera 20 to capture the second image of the current scene when the current scene is not in a moving state.
Referring to FIG. 10, in some embodiments, the control unit 122 includes a detection subunit 1221 and a control subunit 1222. Step S1221 may be implemented by the detection subunit 1221, and step S1222 may be implemented by the control subunit 1222.

That is to say, the detection subunit 1221 is configured to detect whether the current scene is in a moving state when the camera 20 has moved from the first position to the second position in a direction perpendicular to the axial direction of the camera 20, and the control subunit 1222 is configured to control the camera 20 to capture the second image of the current scene when the current scene is not in a moving state.

It can be understood that, when the camera 20 has moved to the second position in a direction perpendicular to its axial direction, if the current scene is in a moving state, that is, a person or object in the current scene is moving, the second image captured by the camera 20 may be blurred. In that case, the matching pixels corresponding to the feature points may not be identifiable in the blurred second image in the subsequent steps, so the first image and the second image cannot be used to obtain depth information. Therefore, the capture is postponed while the current scene is in a moving state; if the current scene is not in a moving state, the camera 20 is controlled to capture the current scene to obtain the second image.
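The patent does not specify how the moving state of the scene is detected. One simple possibility, shown below purely as an assumed illustration rather than the patent's prescribed method, is frame differencing on two consecutive preview frames: if too large a fraction of pixels changes between frames taken a short time apart, the scene is treated as moving and the capture of the second image is postponed.

```python
import numpy as np

def scene_is_moving(prev_frame: np.ndarray,
                    curr_frame: np.ndarray,
                    pixel_threshold: int = 25,
                    moving_fraction: float = 0.02) -> bool:
    """Frame-differencing check on two grayscale preview frames (uint8 arrays
    of the same shape). Returns True if the scene appears to be in motion."""
    diff = np.abs(prev_frame.astype(np.int16) - curr_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed / diff.size > moving_fraction

# Example with synthetic frames: a static scene plus small sensor noise is not "moving".
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(base.astype(np.int16) + rng.integers(-3, 4, base.shape), 0, 255).astype(np.uint8)
print(scene_is_moving(base, noisy))   # False: only noise-level changes
```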
Referring to FIG. 11, in some embodiments, step S13 of processing the first image and the second image to obtain the depth information of the current scene includes the following steps:

S131: obtaining the linear distance between the first position and the second position according to the detection data of the motion sensor 30;

S132: obtaining the feature points of the in-focus subject of the current scene in the first image and in the second image, and the matching pixels corresponding to the feature points; and

S133: calculating the depth information of the in-focus subject according to the parameters of the camera 20, the linear distance, and the coordinates of the feature points and the matching pixels.
Referring to FIG. 12, in some embodiments, the processing module 13 includes a first acquisition unit 131, a second acquisition unit 132, and a calculation unit 133. Step S131 may be implemented by the first acquisition unit 131, step S132 may be implemented by the second acquisition unit 132, and step S133 may be implemented by the calculation unit 133.

That is to say, the first acquisition unit 131 is configured to obtain the linear distance between the first position and the second position according to the detection data of the motion sensor 30; the second acquisition unit 132 is configured to obtain the feature points of the in-focus subject of the current scene in the first image and in the second image and the matching pixels corresponding to the feature points; and the calculation unit 133 is configured to calculate the depth information of the in-focus subject according to the parameters of the camera, the linear distance, and the coordinates of the feature points and the matching pixels.
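The feature-point identification and pixel matching of step S132 are left unspecified in the patent. Purely as an illustration of one conventional way to do it (an assumption, not the patent's method), the sketch below uses OpenCV's ORB detector and a brute-force Hamming matcher to pair feature points between the first and second grayscale images and return their pixel coordinates.

```python
import cv2
import numpy as np

def match_feature_points(first_gray: np.ndarray, second_gray: np.ndarray, max_matches: int = 200):
    """Return a list of ((x1, y1), (x2, y2)) pixel pairs: a feature point in the
    first image and its matching pixel in the second image."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(first_gray, None)
    kp2, des2 = orb.detectAndCompute(second_gray, None)
    if des1 is None or des2 is None:
        return []                                    # no usable feature points found
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches]

# Usage: pairs = match_feature_points(first_image, second_image), where both images
# are grayscale uint8 frames captured at O_l and O_r.
```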
Referring to FIGS. 4, 5 and 13, in a specific embodiment of the present invention, the camera 20 translates along the X-axis direction. In the figures, P is a pixel point of the in-focus subject that the user is interested in in the current scene, and F is the focal length of the camera 20 when the first image and the second image are captured. It can be understood that the depth information corresponding to the current scene is composed of the depth information of a plurality of pixel points; that is to say, the depth information of these pixel points must be obtained in order to obtain the depth information of the current scene. In addition, when the camera 20 is at the first position Ol and at the second position Or, its framing is essentially the same; that is to say, with the camera 20 at the first position Ol and at the second position Or respectively, the in-focus subject of the current scene remains within the framing of the camera 20. Feature-point identification and matching of the corresponding pixel points are then performed on the captured first image and second image, where the feature points are feature points of the in-focus subject that the user is interested in in the current scene. For example, the midpoint of the bottom of the cup shown in FIG. 4 (the first image) and FIG. 5 (the second image) is a feature point to be identified, and the pixels corresponding to the midpoint of the cup bottom in the two images are the matching pixels. The depth information of each matching pixel is then calculated from the parameters of the camera 20, the linear distance S, and the coordinates of the identified feature points and matching pixels. Specifically, the linear distance S between the first position Ol and the second position Or can be obtained from the detection data of the motion sensor 30, and the depth information D of the matching pixel corresponding to a feature point can then be calculated from the linear distance S. With the quantities defined above, the triangulation relation is D = (S × F) / (Xl + Xr − C), where Xl + Xr − C is the disparity of the matching pixels. In this way, after the depth information D of every matching pixel in the current scene has been calculated with the above formula, the depth information corresponding to the current scene can be assembled from these matching pixels carrying depth information.
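To make the triangulation above concrete, the short Python sketch below evaluates D = S·F / (Xl + Xr − C) for one matched pixel pair. It is a minimal illustration of the stated geometry, with F, Xl, Xr, and C all expressed in pixels and S in metres; the example numbers are invented.

```python
def depth_from_match(s_m: float, f_px: float, xl_px: float, xr_px: float, c_px: float) -> float:
    """Depth D of a matched point: D = S * F / (Xl + Xr - C).
    s_m  : linear camera displacement S between O_l and O_r (metres)
    f_px : focal length F in pixels
    xl_px: distance from the point to the left edge of the first image (pixels)
    xr_px: distance from the point to the right edge of the second image (pixels)
    c_px : image width C (pixels)"""
    disparity = xl_px + xr_px - c_px
    if disparity <= 0:
        raise ValueError("non-positive disparity: point too far away or mismatched")
    return s_m * f_px / disparity

# Invented example: a 5 cm move, 800 px focal length, 40 px disparity -> 1.0 m depth.
print(depth_from_match(s_m=0.05, f_px=800.0, xl_px=360.0, xr_px=320.0, c_px=640.0))
```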
In other embodiments, the camera 20 may also move from the first position to the second position along the Z-axis direction. The depth information is then still calculated from the linear distance S between the first position and the second position, the parameters of the camera 20, and the coordinates of the feature points and matching pixels.

The electronic device 100 further includes a housing, a memory, a circuit board, and a power supply circuit. The circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used to supply power to the circuits and components of the electronic device 100; the memory is used to store executable program code; and the depth detection device 10 reads the executable program code stored in the memory and runs the program corresponding to the executable program code to implement the depth detection method of any of the above embodiments of the present invention.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "exemplary embodiment", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic use of these terms does not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing a specific logical function or step of the process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including in a substantially concurrent manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.

The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be regarded as a sequenced list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer disk cartridge (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and can then be stored in a computer memory.

It should be understood that the parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, a plurality of steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one or a combination of the following techniques known in the art: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.

Those of ordinary skill in the art can understand that all or some of the steps carried out by the methods of the above embodiments may be completed by instructing the related hardware through a program, and the program may be stored in a computer-readable storage medium; when executed, the program includes one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may physically exist separately, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
Claims (11)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710138684.5A CN107025666A (en) | 2017-03-09 | 2017-03-09 | Single-camera-based depth detection method, device, and electronic device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107025666A true CN107025666A (en) | 2017-08-08 |
Family
ID=59525923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710138684.5A Pending CN107025666A (en) | 2017-03-09 | 2017-03-09 | Single-camera-based depth detection method, device, and electronic device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107025666A (en) |
- 2017-03-09: Application CN201710138684.5A filed (CN); published as CN107025666A; status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105989626A (en) * | 2015-02-10 | 2016-10-05 | 深圳超多维光电子有限公司 | Three-dimensional scene construction method and apparatus thereof |
CN105141942A (en) * | 2015-09-02 | 2015-12-09 | 小米科技有限责任公司 | 3d image synthesizing method and device |
CN105376484A (en) * | 2015-11-04 | 2016-03-02 | 深圳市金立通信设备有限公司 | Image processing method and terminal |
Non-Patent Citations (1)
Title |
---|
许凌羽 (Xu Lingyu): "Research on the Simulation Model of a Vision Coordinate Measuring Machine", China Master's Theses Full-text Database (Information Science and Technology) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107749069A (en) * | 2017-09-28 | 2018-03-02 | 联想(北京)有限公司 | Image processing method, electronic equipment and image processing system |
CN110800023A (en) * | 2018-07-24 | 2020-02-14 | 深圳市大疆创新科技有限公司 | Image processing method and equipment, camera device and unmanned aerial vehicle |
CN111213364A (en) * | 2018-12-21 | 2020-05-29 | 深圳市大疆创新科技有限公司 | Shooting equipment control method, shooting equipment control device and shooting equipment |
CN109889709A (en) * | 2019-02-21 | 2019-06-14 | 维沃移动通信有限公司 | A camera module control system, method and mobile terminal |
CN110517305A (en) * | 2019-08-16 | 2019-11-29 | 兰州大学 | A 3D Image Reconstruction Method of Fixed Objects Based on Image Sequence |
CN110517305B (en) * | 2019-08-16 | 2022-11-04 | 兰州大学 | Image sequence-based fixed object three-dimensional image reconstruction method |
CN112771576A (en) * | 2020-05-06 | 2021-05-07 | 深圳市大疆创新科技有限公司 | Position information acquisition method, device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3248374B1 (en) | Method and apparatus for multiple technology depth map acquisition and fusion | |
JP6626954B2 (en) | Imaging device and focus control method | |
CN107025666A (en) | Single-camera-based depth detection method, device, and electronic device | |
CN105659580B (en) | A kind of Atomatic focusing method, device and electronic equipment | |
CN107223330B (en) | Depth information acquisition method and device and image acquisition equipment | |
CN108076278B (en) | A kind of automatic focusing method, device and electronic equipment | |
KR102661185B1 (en) | Electronic device and method for obtaining images | |
CN105554403B (en) | Control method, control device and electronic device | |
CN107924104A (en) | Depth sense focuses on multicamera system automatically | |
US9781412B2 (en) | Calibration methods for thick lens model | |
CN103837129B (en) | Distance-finding method in a kind of terminal, device and terminal | |
KR102794864B1 (en) | Electronic device for using depth information and operating method thereof | |
CN108260360B (en) | Scene depth calculation method and device and terminal | |
WO2019011091A1 (en) | Photographing reminding method and device, terminal and computer storage medium | |
CN108700408A (en) | Three-dimensional shape data and texture information generation system, shooting control program, and three-dimensional shape data and texture information generation method | |
CN104469167A (en) | Automatic focusing method and device | |
US11042984B2 (en) | Systems and methods for providing image depth information | |
KR20200027276A (en) | Electronic device for obtaining images by controlling frame rate for external object moving through point ofinterest and operating method thereof | |
CN114466129A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN106170064A (en) | Camera focusing method, system and electronic equipment | |
CN109714539B (en) | Image acquisition method and device based on gesture recognition and electronic equipment | |
CN105739706A (en) | Control method, control device and electronic device | |
US11283970B2 (en) | Image processing method, image processing apparatus, electronic device, and computer readable storage medium | |
CN105704378A (en) | Control method, control device and electronic device | |
TW201642008A (en) | Image capturing device and dynamic focus method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170808 |
|
RJ01 | Rejection of invention patent application after publication |