CN104102068B - Autofocus method and autofocus device - Google Patents
- Publication number: CN104102068B (application CN201310124420.6A)
- Authority: CN (China)
- Prior art keywords: depth, depth information, target, focus, image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to autofocus technology, and in particular to an autofocus method and an autofocus device that apply stereo-vision image processing.
Background Art
Digital cameras have sophisticated and complex mechanical structures that enhance both the functionality and the handling of the camera. Besides factors such as the user's shooting skill and the surrounding environment, the camera's built-in autofocus (AF) system also has a considerable impact on the quality of the captured image.
Generally speaking, autofocus means that the digital camera moves its lens to change the lens-to-subject distance and, at each lens position, computes a focus evaluation value (hereinafter, focus value) for the subject picture until the maximum focus value is found. The maximum focus value corresponds to the lens position at which the subject appears sharpest. However, with the hill-climbing or regression methods used in current autofocus technology, the continuous lens movement and the search for the maximum focus value require several image frames to complete a single focusing operation, which is time-consuming. Moreover, the lens may overshoot during the search and have to move back and forth, causing objects at the edge of the frame to enter and leave the picture. This is the lens "breathing" phenomenon, and it degrades the stability of the picture.
On the other hand, there is an existing autofocus technique that uses stereo vision to process images and build a three-dimensional depth map of the scene. It can effectively reduce focusing time and lens breathing, improving both focusing speed and picture stability, and has therefore been attracting attention in the field. In general, however, when current stereo-vision image processing derives the three-dimensional coordinate position of each pixel in the image, it often cannot locate every point precisely. In textureless or flat regions, relative depth is hard to identify and the depth of each point cannot be computed accurately, which may leave holes in the three-dimensional depth map. Furthermore, when autofocus is applied to a handheld electronic device (for example, a smartphone), the stereo baseline usually has to be made as small as possible to shrink the product. This makes accurate localization even harder and may increase the number of holes in the depth map, affecting the execution of the subsequent image focusing procedure.
Summary of the Invention
The invention provides an autofocus method and an autofocus device with fast focusing speed and good picture stability.
The autofocus method of the invention is applicable to an autofocus device that includes a first and a second image sensor. The method includes the following steps. A target is selected, and the first and second image sensors photograph the target to generate a first image and a second image. Three-dimensional depth estimation is performed on the first and second images to produce a three-dimensional depth map, which is then optimized to produce an optimized depth map. The depth information corresponding to the target is determined from the optimized depth map, and the focus position for the target is obtained from that depth information. The autofocus device is then driven to execute an autofocus procedure at the focus position.
The autofocus device of the invention includes a first and a second image sensor, a focusing module, and a processing unit. The first and second image sensors photograph a target to generate a first image and a second image. The focusing module controls the focus positions of the two image sensors. The processing unit, coupled to the image sensors and the focusing module, performs three-dimensional depth estimation on the two images to produce a depth map and optimizes the depth map to produce an optimized depth map. The processing unit determines the depth information of the target from the optimized depth map and obtains the focus position from that depth information, and the focusing module executes the autofocus procedure at the focus position.
In an embodiment of the invention, obtaining the focus position from the depth information includes querying a depth lookup table with the depth information to obtain the focus position of the target.
In an embodiment of the invention, selecting the target includes receiving, through the autofocus device, a selection signal from the user, or having the autofocus device run a target-detection procedure to select the target automatically, and obtaining the target's coordinate position.
In an embodiment of the invention, determining the target's depth information from the optimized depth map and obtaining the focus position includes: selecting a block that covers the target, reading the depth information of multiple neighborhood pixels in the block, and performing a statistical operation on those depths to obtain optimized depth information for the target; the focus position is then obtained from the optimized depth information.
In an embodiment of the invention, the method further includes running an object-tracking procedure on the target to obtain at least one piece of feature information and a motion trajectory of the target, where the feature information includes center of gravity, color, area, contour, or shape.
In an embodiment of the invention, the method further includes storing the target's depth information at different points in time in a depth information database, and performing movement estimation on those stored depths to obtain the target's depth variation trend.
In an embodiment of the invention, the optimization is Gaussian smoothing.
In an embodiment of the invention, the device further includes a storage unit coupled to the processing unit for storing the first and second images and the depth lookup table. The processing unit queries the depth lookup table with the depth information to obtain the focus position of the target.
In an embodiment of the invention, the processing unit further includes a block depth estimator, which selects a block covering the target, reads the depth information of multiple neighborhood pixels in the block, performs a statistical operation on those depths to obtain optimized depth information for the target, and obtains the focus position from the optimized depth information.
In an embodiment of the invention, the processing unit further includes an object-tracking module coupled to the block depth estimator. It tracks the target to obtain at least one piece of feature information and a motion trajectory, where the feature information includes center of gravity, color, area, contour, or shape, so that the block depth estimator performs the statistical operation using both the feature information and the neighborhood depths.
In an embodiment of the invention, the storage unit further includes a depth information database, and the processing unit further includes a movement prediction module. The database stores the target's depth information at different points in time. The movement prediction module, coupled to the storage unit and the focusing module, performs movement prediction on the stored depths to obtain the target's depth variation trend, so that the focusing module moves the first and second image sensors smoothly according to that trend.
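The movement prediction described in this embodiment can be sketched, under the assumption of a simple linear trend, as a least-squares fit over the stored (time, depth) samples. The patent does not name an estimator, so the function below is an illustrative assumption only.

```python
# Hypothetical sketch: extrapolate the target's depth trend from a
# history of (time, depth) samples with an ordinary least-squares line.
def predict_depth(history, t_next):
    """history: list of (t, depth) samples from the depth database;
    returns the depth extrapolated to time t_next."""
    n = len(history)
    st = sum(t for t, _ in history)
    sd = sum(d for _, d in history)
    stt = sum(t * t for t, _ in history)
    std = sum(t * d for t, d in history)
    denom = n * stt - st * st
    if denom == 0:          # a single sample (or all at one instant):
        return sd / n       # no trend, just return the mean depth
    slope = (n * std - st * sd) / denom
    intercept = (sd - slope * st) / n
    return slope * t_next + intercept
```

With this trend in hand, the focusing module could move the lens toward the predicted position instead of reacting after the fact, which is one way to read "smooth movement" in the passage above.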
Based on the above, the autofocus method and device of the invention apply stereo-vision image processing and optimize the resulting three-dimensional depth map to obtain the focus position, so the relevant autofocus steps can be completed within the time of a single image frame. The autofocus device and method therefore focus quickly. In addition, since no search is required, no lens breathing occurs and the picture remains stable.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a block diagram of an autofocus device according to an embodiment of the invention;
FIG. 2 is a flowchart of an autofocus method according to an embodiment of the invention;
FIG. 3 is a block diagram of the storage unit and processing unit in the embodiment of FIG. 1;
FIG. 4 is a flowchart of an autofocus method according to another embodiment of the invention;
FIG. 5 is a flowchart of the steps for determining the optimized depth information of the target in the embodiment of FIG. 4;
FIG. 6 is a flowchart of an autofocus method according to yet another embodiment of the invention.
Reference numerals:
100: autofocus device;
110: first image sensor;
120: second image sensor;
130: focusing module;
140: storage unit;
141: depth information database;
150: processing unit;
151: block depth estimator;
153: object tracking module;
155: movement prediction module;
S110, S120, S130, S140, S150, S151, S152, S160, S410, S610, S620: steps.
Detailed Description
FIG. 1 is a block diagram of an autofocus device according to an embodiment of the invention. Referring to FIG. 1, the autofocus device 100 of this embodiment includes a first image sensor 110, a second image sensor 120, a focusing module 130, a storage unit 140, and a processing unit 150. In this embodiment, the autofocus device 100 is, for example, a digital camera, a digital video camcorder (DVC), or another handheld electronic device with photo or video capability, though the invention is not limited in this respect.
Referring to FIG. 1, in this embodiment the first and second image sensors 110, 120 may include components such as a lens, a photosensitive element, or an aperture for capturing images. The focusing module 130, storage unit 140, and processing unit 150 may be functional modules implemented in hardware and/or software, where the hardware may include a central processing unit, a chipset, a microprocessor, or another hardware device with image processing capability, or a combination of these, and the software may be an operating system, a driver, and so on. In this embodiment, the processing unit 150 is coupled to the first and second image sensors 110, 120, the focusing module 130, and the storage unit 140; it can control the image sensors and the focusing module and store related information in the storage unit 140. The function of each module of the autofocus device 100 is described in detail below together with FIG. 2.
FIG. 2 is a flowchart of an autofocus method according to an embodiment of the invention. Referring to FIG. 2, in this embodiment the method can be performed, for example, by the autofocus device 100 of FIG. 1. The detailed steps are described below with reference to the modules of the autofocus device 100.
First, in step S110, a target is selected. Specifically, in this embodiment the target may be selected by having the autofocus device 100 receive a selection signal from the user, for example by touch input or by moving the imaging device so that a particular region is framed; the invention is not limited to this. In other feasible embodiments, the autofocus device 100 may instead run a target-detection procedure to select the target automatically and obtain its coordinate position, for example using face detection, smile detection, or subject-detection techniques. The invention is not limited to these either; a person of ordinary skill in the art can design the target-selection modes of the autofocus device 100 according to actual needs, so they are not elaborated here.
Next, in step S120, the first image sensor 110 and the second image sensor 120 photograph the target to generate a first image and a second image, respectively, for example a left-eye image and a right-eye image. In this embodiment, the two images can be stored in the storage unit 140 for use in subsequent steps.
Next, in step S130, the processing unit 150 performs three-dimensional depth estimation on the first and second images to produce a three-dimensional depth map. Specifically, the processing unit 150 can process the images using stereo-vision techniques to obtain the three-dimensional coordinate position of the target in space and the depth of each point in the image, and, after obtaining this preliminary per-point depth information, assemble all of it into a single three-dimensional depth map.
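As a rough illustration of step S130, the sketch below matches a pair of rectified scanlines with a sum-of-absolute-differences (SAD) search and triangulates depth as f·B/d. The SAD matcher, window size, and camera constants are textbook assumptions for illustration, not details taken from the patent.

```python
# Illustrative stereo sketch: per-pixel disparity by SAD block matching
# on rectified rows, then depth by triangulation (depth = f * B / d).
def disparity_sad(left_row, right_row, window=1, max_disp=4):
    """For each left-row pixel, find the horizontal shift d that
    minimizes the sum of absolute differences against the right row."""
    n = len(left_row)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):
            cost = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x - d + w
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left_row[xl] - right_row[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_mm):
    """Triangulate depth from disparity; zero disparity means unknown."""
    return [focal_px * baseline_mm / d if d > 0 else None for d in disp]
```

Note how textureless (flat) stretches of the rows give every candidate shift the same cost, so their disparity is unreliable; this is exactly the source of the depth-map holes the passage describes.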
Next, in step S140, the processing unit 150 optimizes the three-dimensional depth map to produce an optimized depth map. Specifically, in this embodiment the optimization uses image processing techniques to weight each point's depth against the depths of its neighbors; for example, the optimization may be Gaussian smoothing. In short, in Gaussian smoothing each pixel's value is the weighted average of the surrounding pixel values: the original pixel has the largest Gaussian weight, and neighboring pixels receive smaller weights as their distance from it grows. After the processing unit 150 applies Gaussian smoothing to the depth map, the per-point depths become more continuous while the depth information at edges is preserved. This not only mitigates inaccurate or discontinuous depths recorded in the original depth map, but also patches the holes in the map using the depth information of their neighborhoods. Note that although the optimization is illustrated here with Gaussian smoothing, the invention is not limited to it; in other feasible embodiments, a person of ordinary skill in the art can choose other suitable statistical operations to perform the optimization, which are not elaborated here.
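A minimal sketch of the smoothing and hole patching just described, assuming holes are marked as `None` and filled by normalized convolution: holes get zero weight, so valid neighbors fill them in. The kernel radius and sigma are illustrative choices, not values from the patent.

```python
import math

def gaussian_kernel(radius=1, sigma=1.0):
    """Square Gaussian kernel of side 2*radius + 1 (unnormalized)."""
    return [[math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
             for dx in range(-radius, radius + 1)]
            for dy in range(-radius, radius + 1)]

def smooth_depth(depth, radius=1, sigma=1.0):
    """Normalized convolution over a 2-D depth map with None holes:
    each output is the Gaussian-weighted mean of valid neighbors, so a
    hole surrounded by data gets patched from its neighborhood."""
    k = gaussian_kernel(radius, sigma)
    h, w = len(depth), len(depth[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and depth[yy][xx] is not None:
                        wgt = k[dy + radius][dx + radius]
                        num += wgt * depth[yy][xx]
                        den += wgt
            if den > 0:
                out[y][x] = num / den
    return out
```

Plain Gaussian smoothing would treat a hole as a value; giving holes zero weight is the design choice that turns the same pass into hole repair as well.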
Next, in step S150, the processing unit 150 determines the depth information corresponding to the target from the optimized depth map and obtains the focus position of the target from that depth information, for example by querying a depth lookup table with the depth information. In practice, the autofocus procedure can be executed by having the focusing module 130 control the step count of a stepper motor or the current of a voice-coil motor in the autofocus device 100, so as to move the zoom lenses of the first and second image sensors 110, 120 to the desired focus position. Therefore, through a prior calibration of the stepper motor or voice-coil motor, the correspondence between motor steps (or coil current) and the depth at which the target is sharp can be determined in advance, compiled into a depth lookup table, and stored in the storage unit 140. The motor steps or coil current corresponding to the currently obtained target depth can then be looked up, yielding the focus position information for the target.
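The depth lookup table of step S150 can be sketched as follows. The calibration pairs mapping object depth to stepper-motor steps are invented for illustration, and linear interpolation between calibration points is an assumption the patent does not spell out.

```python
import bisect

# Hypothetical (depth_mm, motor_steps) pairs from a calibration run,
# sorted by depth; a voice-coil motor would store current values instead.
DEPTH_TABLE = [(100, 320), (200, 240), (400, 180), (800, 120), (1600, 80)]

def focus_steps(depth_mm):
    """Look up (and linearly interpolate) motor steps for a target depth,
    clamping to the table's ends outside the calibrated range."""
    depths = [d for d, _ in DEPTH_TABLE]
    i = bisect.bisect_left(depths, depth_mm)
    if i == 0:
        return DEPTH_TABLE[0][1]
    if i == len(DEPTH_TABLE):
        return DEPTH_TABLE[-1][1]
    (d0, s0), (d1, s1) = DEPTH_TABLE[i - 1], DEPTH_TABLE[i]
    t = (depth_mm - d0) / (d1 - d0)
    return s0 + t * (s1 - s0)
```

This single table lookup replaces the frame-by-frame focus-value search of hill climbing, which is where the one-frame focusing claim comes from.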
Next, in step S160, the processing unit 150 drives the autofocus device 100 to execute the autofocus procedure at the focus position. Specifically, since the focusing module 130 controls the focus positions of the first and second image sensors 110, 120, once the focus position information for the target is obtained, the processing unit 150 can drive the focusing module 130 to move the zoom lenses of the two image sensors to the focus position, completing autofocus.
Thus, by generating a three-dimensional depth map with stereo-vision image processing and then optimizing the map to obtain the focus position, the autofocus device 100 and autofocus method of this embodiment complete the relevant autofocus steps within the time of a single image frame. They therefore focus quickly, and because no search is required, no lens breathing occurs and the picture remains stable.
FIG. 3 is a block diagram of the processing unit and storage unit in the embodiment of FIG. 1. Referring to FIG. 3, in more detail, the storage unit 140 of the autofocus device 100 of this embodiment further includes a depth information database 141, and the processing unit 150 further includes a block depth estimator 151, an object tracking module 153, and a movement prediction module 155. In this embodiment, the block depth estimator 151, object tracking module 153, and movement prediction module 155 may be functional modules implemented in hardware and/or software, where the hardware may include a central processing unit, a chipset, a microprocessor, or another hardware device with image processing capability, or a combination of these, and the software may be an operating system, a driver, and so on. The functions of the block depth estimator 151, object tracking module 153, movement prediction module 155, and depth information database 141 are described in detail below together with FIGS. 4 to 6.
FIG. 4 is a flowchart of an autofocus method according to another embodiment of the invention. Referring to FIG. 4, in this embodiment the method can be performed, for example, by the autofocus device 100 of FIG. 1 together with the processing unit 150 of FIG. 3. The method is similar to that of the FIG. 2 embodiment; only the differences are described below.
FIG. 5 is a flowchart of the steps for determining the optimized depth information of the target in the embodiment of FIG. 4. Step S150 of FIG. 4, determining the target's depth information from the optimized depth map and obtaining the focus position from it, further includes sub-steps S151 and S152. Referring to FIG. 5, first, in step S151, the block depth estimator 151 selects a block covering the target, reads the depth information of multiple neighborhood pixels in the block, and performs a statistical operation on those depths to obtain optimized depth information for the target. The purpose of this statistical operation is to compute the target's effective depth more reliably, thereby avoiding the possibility of focusing on the wrong target.
For example, the statistical operation may be a mean, mode, median, minimum, quartile, or other suitable mathematical-statistical operation. In more detail, the mean operation uses the block's average depth as the optimized depth for the subsequent autofocus steps; the mode operation uses the most frequent depth in the block; the median operation uses the median depth in the block; the minimum operation uses the distance of the nearest object in the block; and the quartile operation uses the first or second quartile of the block's depths. Notably, the invention is not limited to these; a person of ordinary skill in the art can choose other suitable statistical operations to obtain the target's optimized depth information according to actual needs, which are not elaborated here.
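The statistics listed above can be sketched over the valid depths of a block as follows; `None` entries stand for depth-map holes, and which statistic to prefer is a design choice the passage leaves open.

```python
import statistics

def block_depth(depths, method="median"):
    """Reduce a block's neighborhood depths to one optimized depth using
    one of the statistics named in the text; holes (None) are skipped."""
    valid = [d for d in depths if d is not None]
    if not valid:
        return None
    if method == "mean":
        return statistics.mean(valid)
    if method == "mode":
        return statistics.mode(valid)
    if method == "median":
        return statistics.median(valid)
    if method == "min":            # nearest object in the block
        return min(valid)
    if method == "quartile":       # first quartile of the block depths
        return statistics.quantiles(valid, n=4)[0]
    raise ValueError(f"unknown method: {method}")
```

The median and mode are robust to a few wrong disparities inside the block, while the minimum biases focus toward the nearest object, which matches the trade-offs the passage sketches.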
Next, in step S152, the focus position of the target is obtained from the optimized depth information. In this embodiment, the method of step S152 has already been detailed in step S150 of the FIG. 2 embodiment and is not repeated here.
In addition, referring again to FIG. 4, in this embodiment the autofocus method further includes step S410, in which the object tracking module 153 executes an object tracking procedure on the target to obtain at least one piece of feature information and a motion trajectory of the target. Specifically, the feature information of the target may include center of gravity, color, area, contour, or shape information. The object tracking module 153 may use different object tracking algorithms to extract the various components that form the target in the first and second images, aggregate these components into higher-order feature information, and track the target by comparing the feature information between consecutive first images or consecutive second images captured at different points in time. It should be noted that the invention does not limit the choice of object tracking algorithm; those of ordinary skill in the art may select an appropriate algorithm according to actual needs to obtain the feature information and motion trajectory of the target, so no further details are given here. Moreover, the object tracking module 153 is coupled to the block depth estimator 151 and can feed the feature information and motion trajectory back to it. The block depth estimator 151 may then perform statistical operations with different weightings according to the feature information of the target, the tracking-estimated pixel confidence (similarity), and the depth information of the neighboring pixels, making the optimized depth information of the target more accurate.
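The patent says only that the block depth estimator performs "statistical operations with different weightings" using the tracking-estimated pixel confidence; one plausible realization is a confidence-weighted average, sketched below. The weighting scheme and function name are assumptions.

```python
import numpy as np

def weighted_block_depth(depths, confidences):
    """Confidence-weighted block depth: each neighboring pixel's depth is
    weighted by the tracking-estimated confidence (similarity) of that
    pixel. Pixels with non-finite depth or zero confidence are skipped."""
    d = np.asarray(depths, dtype=float)
    w = np.asarray(confidences, dtype=float)
    valid = np.isfinite(d) & (w > 0)
    if not valid.any():
        raise ValueError("no usable depth samples")
    return float(np.average(d[valid], weights=w[valid]))
```

With uniform confidences this reduces to the plain mean; pixels the tracker matches poorly contribute proportionally less to the optimized depth.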
FIG. 6 is a flowchart of an autofocus method according to yet another embodiment of the invention. Referring to FIG. 6, in this embodiment the autofocus method may be executed, for example, by the autofocus device 100 of FIG. 1 together with the processing unit 150 of FIG. 3. The autofocus method of this embodiment is similar to that of the embodiment of FIG. 4; only the differences between the two are described below.
In this embodiment, the autofocus method further includes steps S610 and S620. In step S610, the storage unit 140 and the processing unit 150 store the depth information of the target at different points in time in the depth information database 141 (shown in FIG. 3). Specifically, while the autofocus device executes step S150, the three-dimensional position of the moving target is obtained continuously, so the processing unit 150 can write the depth information of the target at each point in time into the depth information database 141 of the storage unit 140.
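As a concrete stand-in for the depth information database 141, step S610 amounts to appending timestamped depth samples to a bounded history. The class name, fixed capacity, and tuple layout below are assumptions for illustration only.

```python
from collections import deque

class DepthHistory:
    """A minimal stand-in for the depth information database 141: a bounded
    buffer of (timestamp, depth) samples for the tracked target."""
    def __init__(self, maxlen=32):
        self.samples = deque(maxlen=maxlen)  # oldest samples drop off

    def record(self, t, depth):
        """Store the target's depth observed at time t (step S610)."""
        self.samples.append((t, depth))

    def latest(self, n):
        """Return the n most recent (timestamp, depth) samples."""
        return list(self.samples)[-n:]
```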
Next, step S620 is executed: the movement prediction module 155 performs movement estimation based on the depth information in the depth information database 141 to obtain the depth variation trend of the target. Specifically, the movement prediction module 155 is coupled to the storage unit 140 and the focusing module 130. When the movement prediction module 155 runs its movement estimation routine on the depth information in the depth information database 141, it obtains the trend of the target's three-dimensional position as it moves through space — in particular the trend of the target's position along the Z axis, that is, the depth variation trend of the target. This helps predict where the target will be at the next instant and therefore aids autofocusing. Furthermore, after the depth variation trend of the target is obtained, it can be passed to the focusing module 130, which controls the first and second image sensors 110 and 120 to move smoothly according to the trend. More specifically, before the focusing module 130 executes the autofocus procedure, the autofocus device 100 can pre-adjust the lens positions of the first and second image sensors 110 and 120 according to the depth variation trend of the target, so that the lens positions are already close to the focus position obtained in step S150. As a result, the movement performed when the autofocus device 100 executes the autofocus procedure of step S160 is smoother, which increases the stability of the autofocus device 100.
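The patent does not specify the estimation model used by the movement prediction module 155; a simple way to realize step S620 is to fit a straight line to the recent (timestamp, depth) samples and extrapolate. Linear extrapolation and the function name are assumed choices for illustration.

```python
import numpy as np

def predict_next_depth(history, t_next):
    """Estimate the target's depth at time t_next from a list of
    (timestamp, depth) samples by least-squares linear extrapolation —
    one simple realization of the 'movement estimation' of step S620."""
    t, d = zip(*history)
    slope, intercept = np.polyfit(t, d, 1)  # first-degree least-squares fit
    return slope * t_next + intercept
```

The predicted depth can then seed the lens pre-adjustment: the focusing module starts near the extrapolated focus position instead of sweeping from the previous one, which is what makes the step S160 movement smoother.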
In addition, the depth information database 141 and the movement prediction module 155 can each feed the target's depth information at different points in time and its depth variation trend back to the object tracking module 153. The object tracking module 153 can then use this depth information and trend when computing and analyzing the feature signals and depth information. This reduces the computational load of the system, increases processing speed, makes the object tracking results more accurate, and also improves the focusing performance of the autofocus device 100.
In summary, the autofocus method and autofocus device of the embodiments of the invention apply stereo vision image processing and further optimize the resulting three-dimensional depth map to obtain the focus position, so the autofocus procedure can be completed in the time of a single image. The autofocus device and method of the invention therefore focus quickly. Moreover, since no repeated searching is required, no focus breathing occurs, giving good focusing stability.
Finally, it should be noted that the above embodiments merely illustrate the technical solutions of the invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201310124420.6A CN104102068B (en) | 2013-04-11 | 2013-04-11 | Autofocus method and autofocus device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN104102068A CN104102068A (en) | 2014-10-15 |
| CN104102068B true CN104102068B (en) | 2017-06-30 |
Family
ID=51670323
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201310124420.6A Expired - Fee Related CN104102068B (en) | 2013-04-11 | 2013-04-11 | Autofocus method and autofocus device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN104102068B (en) |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104363378B (en) * | 2014-11-28 | 2018-01-16 | 广东欧珀移动通信有限公司 | camera focusing method, device and terminal |
| CN104363377B (en) * | 2014-11-28 | 2017-08-29 | 广东欧珀移动通信有限公司 | Display methods, device and the terminal of focus frame |
| CN104618643B (en) * | 2015-01-20 | 2017-10-17 | 广东欧珀移动通信有限公司 | A shooting method and device |
| CN106324945A (en) * | 2015-06-30 | 2017-01-11 | 中兴通讯股份有限公司 | Non-contact automatic focusing method and device |
| CN106713726B (en) * | 2015-07-14 | 2019-11-29 | 无锡天脉聚源传媒科技有限公司 | A kind of method and apparatus identifying style of shooting |
| CN106060521B (en) * | 2016-06-21 | 2019-04-16 | 英华达(上海)科技有限公司 | Deep image construction method and system |
| CN106210514B (en) * | 2016-07-04 | 2019-04-12 | Oppo广东移动通信有限公司 | Photographing focusing method and device and intelligent equipment |
| CN106161945A (en) * | 2016-08-01 | 2016-11-23 | 乐视控股(北京)有限公司 | Take pictures treating method and apparatus |
| CN106254855B (en) * | 2016-08-25 | 2017-12-05 | 锐马(福建)电气制造有限公司 | A kind of three-dimensional modeling method and system based on zoom ranging |
| CN107147848B (en) * | 2017-05-23 | 2023-08-25 | 杭州度康科技有限公司 | Automatic focusing method and real-time video acquisition system adopting same |
| CN107566731A (en) * | 2017-09-28 | 2018-01-09 | 努比亚技术有限公司 | A kind of focusing method and terminal, computer-readable storage medium |
| CN108702456A (en) * | 2017-11-30 | 2018-10-23 | 深圳市大疆创新科技有限公司 | A focusing method, device and readable storage medium |
| WO2020124517A1 (en) * | 2018-12-21 | 2020-06-25 | 深圳市大疆创新科技有限公司 | Photographing equipment control method, photographing equipment control device and photographing equipment |
| CN110378943A (en) * | 2019-06-21 | 2019-10-25 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
| CN119450210B (en) * | 2023-08-02 | 2026-01-09 | 荣耀终端股份有限公司 | Shooting methods, electronic devices, storage media and software products |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101630408A (en) * | 2009-08-14 | 2010-01-20 | 清华大学 | Depth map treatment method and device |
| CN102714741A (en) * | 2009-10-14 | 2012-10-03 | 汤姆森特许公司 | Filtering and edge encoding |
| CN102769749A (en) * | 2012-06-29 | 2012-11-07 | 宁波大学 | A Post-processing Method of Depth Image |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1085769B1 (en) * | 1999-09-15 | 2012-02-01 | Sharp Kabushiki Kaisha | Stereoscopic image pickup apparatus |
| CN102713513B (en) * | 2010-11-24 | 2015-08-12 | 松下电器产业株式会社 | Camera head, image capture method, program and integrated circuit |
| CN103019001B (en) * | 2011-09-22 | 2016-06-29 | 晨星软件研发(深圳)有限公司 | Atomatic focusing method and device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN104102068B (en) | Autofocus method and autofocus device | |
| TWI471677B (en) | Auto focus method and auto focus apparatus | |
| KR102032882B1 (en) | Autofocus method, device and electronic apparatus | |
| US9998650B2 (en) | Image processing apparatus and image pickup apparatus for adding blur in an image according to depth map | |
| US20150201182A1 (en) | Auto focus method and auto focus apparatus | |
| US11956536B2 (en) | Methods and apparatus for defocus reduction using laser autofocus | |
| TWI515470B (en) | Multi-lens autofocus system and method thereof | |
| CN108496350A (en) | A focus processing method and device | |
| CN108702457B (en) | Method, apparatus and computer-readable storage medium for automatic image correction | |
| CN106031148B (en) | Imaging device, method for autofocusing in imaging device, and corresponding computer program | |
| US20140327743A1 (en) | Auto focus method and auto focus apparatus | |
| CN106154688B (en) | A method and device for automatic focusing | |
| EP3109695B1 (en) | Method and electronic device for automatically focusing on moving object | |
| CN104133339B (en) | Automatic focusing method and automatic focusing device | |
| TWI515471B (en) | Auto-focus system for multiple lens and method thereof | |
| EP3218756B1 (en) | Direction aware autofocus | |
| US9300861B2 (en) | Video recording apparatus and focusing method for the same | |
| CN105022138B (en) | Automatic focusing system using multiple lenses and method thereof | |
| CN105022137B (en) | Automatic focusing system using multiple lenses and method thereof | |
| US10051173B2 (en) | Image pick-up apparatus and progressive auto-focus method thereof | |
| TWI906985B (en) | Dual-lens electronic apparatus and image consistency improvement method thereof | |
| CN119697492A (en) | Focusing method, device, equipment and readable storage medium | |
| CN119071631A (en) | Automatic focusing method, device, image acquisition device and storage medium | |
| CN119854631A (en) | Focusing method and electronic device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20170630 |