CN107465851A - Image depth of field measuring method and image extracting device - Google Patents
- Publication number: CN107465851A
- Application number: CN201610390952.8A
- Authority
- CN
- China
- Legal status
- Withdrawn
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
- H04N5/2226—Determination of depth image, e.g. for foreground/background separation
Abstract
Description
Technical Field
The present invention relates to the field of image optics, and in particular to an image depth-of-field measurement method and an image capture device.
Background Art
In recent years, with the evolution of the electronics industry and the rapid advance of manufacturing technology, electronic devices have trended toward light, portable designs that users can apply anytime and anywhere to mobile business, entertainment, or leisure. For example, image capture devices are now widely deployed in smartphones, wearable electronics, aerial drones, and other compact, easily carried devices, letting users capture and store images whenever needed, or upload them to the Internet over a mobile network. This has significant commercial value and enriches everyday life for the general public. As quality of life improves, users demand more from their images, in particular higher imaging quality and richer imaging effects.
For example, please refer to FIG. 1, which is a schematic diagram of the external structure of an existing smartphone. The smartphone 9 is provided with two horizontally arranged optical lenses 91, 92. Because the two optical lenses 91, 92 photograph the same scene from different angles, a stereoscopic image with depth information can be obtained by analyzing and computing the two images captured respectively through the two optical lenses 91, 92. Manufacturers such as HTC, Sony, LG, and Huawei all produce smartphones 9 with dual optical lenses 91, 92 similar to that shown in FIG. 1, and the technique of obtaining depth information through two optical lenses 91, 92 is known to those skilled in the art, so it is not described further here.
However, obtaining depth information by photographing the same scene with two optical lenses 91, 92 suffers from insufficient accuracy: if the scene contains two objects aligned along the same direction as the two lenses, i.e., the two objects are also horizontally arranged relative to the smartphone 9, their depths are difficult to distinguish, and the resulting depth information contains errors. Existing methods of obtaining image depth information therefore leave room for improvement.
Summary of the Invention
An object of the present invention is to provide an image capture device, and in particular an image capture device that applies the image depth-of-field measurement method of the present invention.
In a preferred embodiment, the present invention provides an image depth-of-field measurement method, comprising:
(a) capturing a first image with a first optical lens and a second image with a second optical lens, wherein the first image has a plurality of phase detection pixel groups and is divided into a plurality of image regions;
(b) obtaining first region depth information for each image region from the difference between the first image and the second image; and
(c) for each image region, computing a horizontal image intensity and a vertical image intensity of the first region depth information and, based on the result, selecting either the first region depth information or second region depth information as the adopted region depth information, wherein the second region depth information is obtained from at least some of the plurality of phase detection pixel groups.
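Steps (a) through (c) can be sketched as a per-region selection loop. The sketch below is schematic only: the four callables stand in for the stereo difference, the phase-detection depth estimate, and the two directional intensity computations described in the Detailed Description, and their names are hypothetical.

```python
import numpy as np

def measure_depth(first_image, second_image, regions,
                  phase_depth_fn, stereo_depth_fn,
                  h_intensity_fn, v_intensity_fn):
    """Steps (a)-(c): choose stereo depth or phase-detection depth per region.

    regions: list of (row_slice, col_slice) tiles of the first image.
    The *_fn callables are placeholders for the computations detailed later.
    """
    adopted = []
    for rs, cs in regions:
        # (b) first region depth info from the stereo difference
        d1 = stereo_depth_fn(first_image[rs, cs], second_image[rs, cs])
        # (c) little vertical information -> prefer the phase-detection depth
        if h_intensity_fn(first_image[rs, cs]) > v_intensity_fn(first_image[rs, cs]):
            adopted.append(phase_depth_fn(rs, cs))
        else:
            adopted.append(d1)
    return adopted
```

The two depth estimates and the intensity computations themselves are expanded in the Detailed Description below.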
In a preferred embodiment, the present invention also provides an image capture device, comprising:
a first optical lens;
a second optical lens;
a first sensing element that senses light passing through the first optical lens and incident on the first sensing element to obtain a first image, the first image being divided into a plurality of image regions, wherein the first sensing element includes a plurality of phase detection unit groups so that the first image contains a plurality of phase detection pixel groups respectively corresponding to the phase detection unit groups;
a second sensing element that senses light passing through the second optical lens and incident on the second sensing element to obtain a second image; and
a computing module, connected to the first sensing element and the second sensing element, that obtains first region depth information for each image region from the difference between the first image and the second image and, by computing a horizontal image intensity and a vertical image intensity of the first region depth information of each image region, selects either the first region depth information or second region depth information as the adopted region depth information, wherein the second region depth information is obtained from at least some of the plurality of phase detection pixel groups.
An object of the present invention is to provide an image depth-of-field measurement method that, in addition to obtaining preliminary depth information through two optical lenses, also uses further depth information obtained from the phase detection pixel groups of the image captured through one of the lenses, so as to compensate for the errors in the preliminary depth information that arise when the photographed scene contains objects aligned along the same direction as the two optical lenses.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the external structure of an existing smartphone.
FIG. 2 is a block diagram of a preferred embodiment of the image capture device of the present invention.
FIG. 3 is a conceptual diagram of the first sensing element of the image capture device shown in FIG. 2.
FIG. 4 is a flow chart of a preferred embodiment of the image depth-of-field measurement method of the present invention.
FIG. 5A is a conceptual diagram of the first image obtained in step S1 of FIG. 4.
FIG. 5B is a conceptual diagram of the second image obtained in step S1 of FIG. 4.
FIG. 6 is a flow chart of a preferred method of obtaining, in step S3 of FIG. 4, the second region depth information of any image region of the first image.
FIG. 7A is a conceptual diagram of the first image of FIG. 6.
FIG. 7B is a conceptual diagram of the second image of FIG. 6.
FIG. 8 is a block diagram of a preferred embodiment of the computing module of the image capture device shown in FIG. 2.
Reference Signs:
1 image capture device; 2 image
4 second image; 9 smartphone
11 first optical lens; 12 first sensing element
13 second optical lens; 14 second sensing element
15 computing module; 21 image region
22 phase detection pixel group; 31 first image
32 second image; 41 image region
91 optical lens; 92 optical lens
121 phase detection unit group; 151 image segmentation unit
152 computing unit; 221 first light incident portion
222 second light incident portion; 311 first image block
321 second image block; 322_1 test block
322_m test block; 323_1 test block
323_n test block; 1211 first light incident phase detection unit
1212 second light incident phase detection unit; P1 center position
P2_1 center position; P2_m center position
P3_1 center position; P3_n center position
S1 step; S2 step
S3 step
Detailed Description
The composition of the image capture device of the present invention is described first. Please refer to FIG. 2 and FIG. 3. FIG. 2 is a block diagram of a preferred embodiment of the image capture device of the present invention, and FIG. 3 is a conceptual diagram of the first sensing element of the image capture device shown in FIG. 2. The image capture device 1 includes a first optical lens 11, a first sensing element 12, a second optical lens 13, a second sensing element 14, and a computing module 15. The first optical lens 11 and the second optical lens 13 are arranged horizontally. The first sensing element 12 senses light passing through the first optical lens 11 and incident on the first sensing element 12 to obtain a first image, while the second sensing element 14 senses light passing through the second optical lens 13 and incident on the second sensing element 14 to obtain a second image.
The first sensing element 12 includes a plurality of phase detection unit groups 121, and each phase detection unit group 121 includes a first light incident phase detection unit 1211 and a second light incident phase detection unit 1212 arranged vertically with respect to the first light incident phase detection unit 1211. The computing module 15 is connected to the first sensing element 12 and the second sensing element 14 and can therefore receive the first image and the second image from them, respectively.
Next, how the image capture device measures image depth of field is described. Please refer to FIG. 4 to FIG. 5B. FIG. 4 is a flow chart of a preferred embodiment of the image depth-of-field measurement method of the present invention, FIG. 5A is a conceptual diagram of the first image obtained in step S1 of FIG. 4, and FIG. 5B is a conceptual diagram of the second image obtained in step S1 of FIG. 4. The method includes steps S1 to S3, which are detailed below.
In step S1, when the image capture device 1 is about to take a picture, the first sensing element 12 senses the light passing through the first optical lens 11 and incident on it to obtain a first image 2, and the second sensing element 14 senses the light passing through the second optical lens 13 and incident on it to obtain a second image 4. The first image 2 can be divided into a plurality of image regions, each comprising at least one pixel; for clarity, FIG. 5A marks only one image region 21 of the plurality of image regions in the first image 2. Likewise, the second image 4 can also be divided into a plurality of image regions; FIG. 5B marks only one image region 41 in the second image 4, which corresponds to the image region 21.
Furthermore, since the first sensing element 12 is provided with a plurality of phase detection unit groups 121, the first image 2 it obtains also contains a plurality of phase detection pixel groups 22 respectively corresponding to the phase detection unit groups 121. As shown in FIG. 5A, each phase detection pixel group 22 includes a first light incident portion 221 corresponding to the first light incident phase detection unit 1211 and a second light incident portion 222 corresponding to the second light incident phase detection unit 1212.
In this preferred embodiment, the first light incident portion 221 and the second light incident portion 222 are an upper light incident portion and a lower light incident portion, respectively, and both are located in the same pixel. Practical applications are not limited to this arrangement: the design can be changed so that the first light incident portion 221 and the second light incident portion 222 are a lower and an upper light incident portion, respectively, or so that they are located in different pixels.
In step S2, the computing module 15 computes the difference between the first image 2 and the second image 4 to obtain the first region depth information of each image region. Taking FIG. 5A and FIG. 5B as an example, since the image region 21 of the first image 2 corresponds to the image region 41 of the second image 4, the first region depth information of the image region 21 is obtained by computing the difference between the image region 21 and the image region 41 with the computing module; the first region depth information of the other image regions is obtained in the same way.
In this preferred embodiment, the difference between the first image and the second image is obtained by computing the peak signal-to-noise ratio (PSNR). The PSNR is an objective measure of the similarity of two images: the higher the PSNR, the smaller the phase difference between the images. This is only one embodiment, however; the difference between the first image and the second image can also be obtained, for example, by zero-mean normalized cross-correlation (ZNCC). How the first region depth information of each image region is derived from the difference between the two images, and how that difference is obtained by PSNR or ZNCC, are known to those skilled in the art and are not described further here.
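As a sketch of the PSNR similarity measure mentioned above (this is the standard textbook definition, not a formula given in the patent text; 8-bit pixel values with a peak of 255 are assumed):

```python
import numpy as np

def psnr(block_a, block_b, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized blocks.

    Higher PSNR means more similar blocks, i.e. a smaller phase difference."""
    mse = np.mean((block_a.astype(float) - block_b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical blocks
    return 10.0 * np.log10(peak ** 2 / mse)
```

Comparing the image region 21 with the corresponding image region 41 would then amount to `psnr(region21, region41)`; a ZNCC score could be substituted as the similarity measure without changing the surrounding logic.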
In step S3, the computing module 15 computes a horizontal image intensity and a vertical image intensity of the first region depth information of each image region of the first image 2 and, based on the result, selects either the first region depth information or the second region depth information of each image region as the adopted region depth information of that region. The second region depth information of each image region is obtained from at least some of the plurality of phase detection pixel groups 22 of the first image 2. The adopted region depth information of all image regions obtained in step S3 can then be combined into the global depth information of the first image 2.
More specifically, when the horizontal image intensity of the first region depth information of an image region of the first image 2 is greater than its vertical image intensity, that image region carries little vertical image information. Because the second region depth information of that region is obtained from at least some of the phase detection pixel groups 22 of the first image 2, and the first light incident portion 221 and the second light incident portion 222 of each phase detection pixel group 22 are arranged vertically, the second region depth information carries more vertical image information. In this case the computing module 15 selects the second region depth information as the adopted region depth information of that region, compensating for the lack of vertical image information. Conversely, when the vertical image intensity of the first region depth information of an image region of the first image 2 is greater than its horizontal image intensity, that region already has sufficient vertical image information, so the computing module 15 directly adopts the first region depth information as the region depth information of that region.
In this preferred embodiment, the horizontal image intensity and the vertical image intensity of the first region depth information of any image region of the first image 2 are obtained through mask operators. Specifically, the horizontal image intensity is obtained with a Prewitt operator f_x and the vertical image intensity with another Prewitt operator f_y,
where f_x = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]] and f_y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]].
The above is only one embodiment; the way the horizontal and vertical image intensities are obtained is not limited thereto. The specific implementation of mask operators, such as obtaining the horizontal and vertical image intensities through Prewitt operators, is known to those skilled in the art and is not described further here.
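A minimal sketch of how the two Prewitt responses could be aggregated into the horizontal and vertical intensities used by the selection rule of step S3. Summing the absolute filter responses over the region is an assumption made here for illustration; the patent does not spell out the aggregation.

```python
import numpy as np

FX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)     # Prewitt f_x
FY = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)     # Prewitt f_y

def conv2_valid(img, kernel):
    """Plain 2-D 'valid' correlation; sufficient for a 3x3 mask."""
    h, w = kernel.shape
    rows = img.shape[0] - h + 1
    cols = img.shape[1] - w + 1
    out = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            out[r, c] = np.sum(img[r:r + h, c:c + w] * kernel)
    return out

def directional_intensities(region):
    """Return (horizontal, vertical) image intensity of one image region."""
    region = region.astype(float)
    return (np.abs(conv2_valid(region, FX)).sum(),
            np.abs(conv2_valid(region, FY)).sum())
```

If the horizontal intensity exceeds the vertical one, the region's second region depth information (from the phase detection pixel groups) would be adopted; otherwise the first region depth information is kept.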
Next, how the second region depth information of any image region of the first image 2 is obtained through the phase detection pixel groups 22 of the first image 2 is described. Please refer to FIG. 6 to FIG. 8. FIG. 6 is a flow chart of a preferred method of obtaining, in step S3 of FIG. 4, the second region depth information of any image region of the first image; FIG. 7A is a conceptual diagram of the first image of FIG. 6; FIG. 7B is a conceptual diagram of the second image of FIG. 6; and FIG. 8 is a block diagram of a preferred embodiment of the computing module of the image capture device shown in FIG. 2.
In this preferred embodiment, the computing module 15 of the image capture device 1 includes an image segmentation unit 151 and a computing unit 152. The image segmentation unit 151 gathers the plurality of first light incident portions 221 of the first image 2 to form a first image 31, and gathers the plurality of second light incident portions 222 of the first image 2 to form a second image 32, as shown in FIG. 7A and FIG. 7B. The first image 31 contains a plurality of first image blocks respectively corresponding to the image regions of the first image 2, and the second image 32 contains a plurality of second image blocks respectively corresponding to those image regions. For clarity, FIG. 7A marks only one first image block 311 of the plurality of first image blocks in the first image 31, corresponding to the image region 21 marked in FIG. 5A, and FIG. 7B marks only one second image block 321 of the plurality of second image blocks in the second image 32, likewise corresponding to the image region 21 marked in FIG. 5A.
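The gathering step performed by the image segmentation unit 151 can be sketched as fancy indexing over the phase-detection sample positions. The layout here is an illustration only: it assumes the first and second light incident portions are read out as separate rows of samples whose indices are known, whereas the actual arrangement is the one shown in FIG. 3.

```python
import numpy as np

def gather_phase_images(first_image, top_rows, bottom_rows, cols):
    """Collect the first (upper) light incident portions into image 31 and
    the second (lower) light incident portions into image 32.

    top_rows / bottom_rows / cols: hypothetical index lists locating the
    phase-detection samples within the first image."""
    image31 = first_image[np.ix_(top_rows, cols)]     # first light incident portions 221
    image32 = first_image[np.ix_(bottom_rows, cols)]  # second light incident portions 222
    return image31, image32
```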
The computing unit 152 then receives the first image 31 and the second image 32 from the image segmentation unit 151, obtains the image phase difference between each first image block and its corresponding second image block as well as the image phase differences between each first image block and a plurality of test blocks, and derives the second region depth information of the corresponding image region of the first image 2 from whichever of these image phase differences is smallest. The plurality of test blocks respectively partially overlap, or are adjacent to, the corresponding second image block.
The following takes the image region 21 of FIG. 5A, the first image block 311 of FIG. 7A, and the second image block 321 of FIG. 7B as an example. The computing unit 152 obtains the image phase difference between the first image block 311 and the corresponding second image block 321, as well as the image phase differences E2_1, E2_2 … E2_m, E3_1, E3_2 … E3_n between the first image block 311 and a plurality of test blocks 322_1, 322_2 … 322_m, 323_1, 323_2 … 323_n, where the test blocks respectively partially overlap, or are adjacent to, the corresponding second image block 321 in a horizontal direction.
In this preferred embodiment, among the test blocks 322_1, 322_2 … 322_m, 323_1, 323_2 … 323_n, the center positions P2_1 … P2_m of the test blocks 322_1 … 322_m lie to the left of the center position P1 of the second image block 321, while the center positions P3_1 … P3_n of the test blocks 323_1 … 323_n lie to its right, and the second image block 321 and all test blocks have the same size. This is only one embodiment; the selection of the test blocks is not limited thereto, and those skilled in the art may make any equivalent design change according to actual application needs. For example, the design may be changed so that the test blocks respectively partially overlap, or are adjacent to, the corresponding second image block in a vertical direction.
In particular, the focused area of the first image 2 serves as the reference of the image depth of field. If the image region 21 of the first image 2 is the focused area, the image phase difference E1 between the first image block 311 and the second image block 321 should approach zero. If E1 does not approach zero, the image region 21 is not the focused area; in that case, the position of the block, among the second image block 321 and the test blocks 322_1 … 322_m, 323_1 … 323_n, that has the smallest image phase difference from the first image block 311 represents the relative relationship between the image region 21 of the first image 2 and the focused area, from which the second region depth information of the image region 21 is obtained.
In this preferred embodiment, the second region depth information is expressed as a level in the series -m … -2, -1, 0, 1, 2 … n. For example, when the block with the smallest image phase difference from the first image block 311 is the test block 322_m, the second region depth information of the image region 21 is expressed as -m; when it is the test block 322_2, as -2; when it is the second image block 321 itself, as 0; when it is the test block 323_1, as 1; when it is the test block 323_n, as n; and correspondingly for the other test blocks. The expression of the second region depth information is, however, not limited to such a series of levels.
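This search can be sketched as follows, assuming the candidate blocks are generated by horizontal shifts of the second image block (left shifts mapping to negative levels, matching the numbering above) and compared by mean squared error. MSE is used here as a stand-in for the PSNR/ZNCC comparison, since maximizing PSNR and minimizing MSE select the same block.

```python
import numpy as np

def second_region_depth_level(image31, image32, top, left, size, m, n):
    """Return the depth level in -m..n for the block at (top, left) of image31.

    Candidate blocks are horizontal shifts of the corresponding block of
    image32: shift 0 is the second image block itself, negative shifts lie
    to its left, positive shifts to its right."""
    first_block = image31[top:top + size, left:left + size].astype(float)
    best_level, best_err = 0, np.inf
    for shift in range(-m, n + 1):
        c = left + shift
        if c < 0 or c + size > image32.shape[1]:
            continue  # candidate would fall outside the image
        candidate = image32[top:top + size, c:c + size].astype(float)
        err = np.mean((first_block - candidate) ** 2)  # smaller = smaller phase difference
        if err < best_err:
            best_level, best_err = shift, err
    return best_level
```

A level of 0 indicates the region coincides with the focused area; nonzero levels encode its relative position, as described above.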
Furthermore, in this preferred embodiment, the image phase difference E1 between the first image block 311 and the corresponding second image block 321, as well as the image phase differences E21, E22…E2m, E31, E32…E3n between the first image block 311 and the plurality of test blocks 3221, 3222…322m, 3231, 3232…323n respectively, are obtained by computing the peak signal-to-noise ratio (PSNR). Generally speaking, the peak signal-to-noise ratio is an objective criterion for judging the similarity of two images, and a higher PSNR indicates a smaller image phase difference; as this is well known to those skilled in the art, it is not elaborated here.
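The PSNR comparison described above can be sketched as follows. This is a minimal illustration and not code from the patent; the function name, the block contents, and the 8-bit peak value of 255 are assumptions.

```python
import numpy as np

def psnr(block_a, block_b, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized image blocks.

    A higher PSNR means the two blocks are more similar, i.e. a smaller
    image phase difference in the sense used above."""
    a = np.asarray(block_a, dtype=np.float64)
    b = np.asarray(block_b, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0.0:
        return float("inf")  # identical blocks: no difference at all
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the scheme of this embodiment, such a score would be computed between the first image block and each candidate block, and the candidate with the highest PSNR would be taken as the best match.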
In addition, in practical applications the criterion for evaluating the image phase difference is not limited to the peak signal-to-noise ratio, and those skilled in the art may make any equivalent design change according to actual application requirements. For example, the image phase difference E1 between the first image block 311 and the corresponding second image block 321, as well as the image phase differences E21, E22…E2m, E31, E32…E3n with the plurality of test blocks 3221, 3222…322m, 3231, 3232…323n respectively, may also be obtained by a zero-mean normalized cross-correlation (ZNCC) method. Likewise, the ZNCC method is well known to those skilled in the art and is not elaborated here.
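A hedged sketch of the ZNCC alternative follows; the interface is illustrative rather than taken from the patent. Because each block is mean-subtracted and variance-normalized, the score is insensitive to uniform brightness and contrast changes, which is one reason ZNCC is a common block-matching criterion.

```python
import numpy as np

def zncc(block_a, block_b):
    """Zero-mean normalized cross-correlation of two equally sized blocks.

    The score lies in [-1, 1]; values near 1 indicate near-identical blocks,
    so the candidate with the highest ZNCC has the smallest phase
    difference."""
    a = np.asarray(block_a, dtype=np.float64).ravel()
    b = np.asarray(block_b, dtype=np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    if denom == 0.0:
        return 0.0  # at least one block is constant; correlation undefined
    return float(np.dot(a, b) / denom)
```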
Further, after the computing unit 152 has obtained, by computing the PSNR or the ZNCC, the image phase difference E1 between the first image block 311 and the corresponding second image block 321 as well as the image phase differences E21, E22…E2m, E31, E32…E3n with the plurality of test blocks 3221, 3222…322m, 3231, 3232…323n respectively, it locates the position of the block with the smallest phase difference from the first image block 311 by finding the smallest among the image phase differences E21, E22…E2m, E31, E32…E3n, and thereby obtains the second area depth information of the image area 21 of the first image 2.
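The search just described, i.e. scoring every candidate block against the first image block and converting the position of the best match into a depth level in -m…n, could be sketched like this. The helper and its interface are hypothetical; the patent does not prescribe this exact form.

```python
import numpy as np

def second_area_depth_level(first_block, candidates, levels, peak=255.0):
    """Return the depth level of the candidate block most similar to
    first_block.

    `candidates[i]` is a block at shift `levels[i]` along the phase
    detection direction, with level 0 reserved for the corresponding second
    image block; the level of the best match is the second area depth
    information."""
    def psnr(a, b):
        mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
        return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

    scores = [psnr(first_block, c) for c in candidates]
    return levels[int(np.argmax(scores))]  # highest PSNR = smallest difference
```

With levels [-2, -1, 0, 1, 2] and the candidate at level -2 identical to the first image block, the helper returns -2, matching the series representation of the previous paragraphs.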
Similarly, the second area depth information of every other image area of the first image 2 can be obtained by the method described above, using the first image block on the first image 31 and the second image block on the second image 32 that correspond to that image area. The above is only one embodiment, however; the method of obtaining the second area depth information of each image area 21 through at least some of the phase detection pixel groups 22 of the first image 2 is not limited to that shown in FIG. 6.
From the above description it can be seen that, besides obtaining preliminary image depth information through two optical lenses, the image capturing device and the image depth measuring method of the present invention also obtain further image depth information with the aid of the phase detection pixel groups of the image captured through one of the optical lenses, so as to compensate for the depth error in the preliminary image depth information that arises when the photographed scene contains features aligned with the arrangement direction of the two optical lenses. It should be added that obtaining the image depth information solely through a single optical lens and the phase detection pixel groups of its captured image performs poorly under insufficient ambient brightness or in smooth areas (that is, areas of low contrast); the image capturing device and the image depth measuring method of the present application overcome these drawbacks.
Of course, the above are only embodiments, and those skilled in the art may make any equivalent design change according to actual application requirements. For example, the design may be changed such that the first optical lens 11 and the second optical lens 13 of the image capturing device shown in FIG. 2 are arranged vertically, while the first light incident phase detection unit 1211 and the second light incident phase detection unit 1212 of each phase detection unit group 121 of the first sensing element 12 are arranged horizontally; that is, the first light incident part 221 corresponding to the first light incident phase detection unit 1211 and the second light incident part 222 corresponding to the second light incident phase detection unit 1212 are a left light incident pixel and a right light incident pixel respectively, or a right light incident pixel and a left light incident pixel respectively. Accordingly, step S3 shown in FIG. 4 should be changed such that, when the horizontal image intensity of the first area depth information of any image area of the first image 2 is greater than its vertical image intensity, the computing module 15 directly adopts the first area depth information of that image area as the adopted area depth information of that image area, and when the vertical image intensity of the first area depth information of any image area of the first image 2 is greater than its horizontal image intensity, the computing module 15 selects the second area depth information of that image area as the adopted area depth information of that image area.
The above descriptions are only preferred embodiments of the present invention and are not intended to limit its claims; all other equivalent changes or modifications completed without departing from the spirit disclosed by the present invention shall fall within the claims of this application.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610390952.8A CN107465851A (en) | 2016-06-03 | 2016-06-03 | Image depth of field measuring method and image extracting device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107465851A true CN107465851A (en) | 2017-12-12 |
Family
ID=60544932
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20171212 |