CN104683692A - Continuous shooting method and device - Google Patents
Continuous shooting method and device
- Publication number: CN104683692A (application CN201510058180.3A)
- Authority: CN (China)
- Prior art keywords: image, continuous shooting, image frames, image frame, acquired image
- Legal status: Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Studio Devices (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to the field of terminals, and in particular to a continuous shooting method and device.
Background Art
With the continuous development of terminal technology, terminals integrate more and more functions and have gradually become an indispensable part of daily life. Among these functions, the GIF (Graphics Interchange Format) animation mode provided by terminals is particularly entertaining and popular with users. A GIF animation works as follows: multiple images are stored in a single GIF file, and the image data stored in the file are read out and displayed one by one to form the simplest kind of animation. In practice, users usually use the continuous shooting (burst) mode to capture image frames of consecutive actions or expressions as the raw material for GIF synthesis. In the prior art, the continuous shooting mode generally captures image frames at a fixed time interval; to make the captured images continuous, this fixed interval is set short, but the speed at which the user changes expressions or poses often cannot keep up with the camera's burst rate. As a result, blurred frames or useless transitional frames taken while the user is changing expression or pose are captured during the burst, wasting the terminal's storage resources.
Summary of the Invention
Embodiments of the present invention provide a continuous shooting method and device, which can save the storage resources of a terminal and improve the accuracy of capturing useful image frames during continuous shooting.
An embodiment of the present invention provides a continuous shooting method, the continuous shooting method including:
receiving a continuous shooting start instruction, acquiring a starting image frame containing a facial image captured by a camera, and storing the starting image frame as a reference image;
acquiring a captured image frame containing a facial image captured by the camera;
detecting a difference value between facial feature information in the captured image frame and facial feature information of the reference image; and
when the difference value is greater than or equal to a threshold, setting the captured image frame as the reference image, storing it, and returning to the step of acquiring a captured image frame containing a facial image captured by the camera.
Correspondingly, an embodiment of the present invention provides a continuous shooting device, the device including:
an image frame acquisition unit, configured to receive a continuous shooting start instruction and acquire a starting image frame containing a facial image captured by a camera;
a reference image setting unit, configured to store the starting image frame as a reference image;
the image frame acquisition unit being further configured to acquire a captured image frame containing a facial image captured by the camera;
a facial feature detection unit, configured to detect a difference value between facial feature information in the captured image frame and facial feature information of the reference image; and
the reference image setting unit being further configured to, when the difference value is greater than or equal to a threshold, set the captured image frame as the reference image and store it.
In the embodiments of the present invention, a continuous shooting start instruction can be received, a starting image frame containing a facial image captured by the camera can be acquired and stored as a reference image, and a captured image frame containing a facial image can then be acquired from the camera; the difference value between the facial feature information in the captured image frame and the facial feature information of the reference image is detected, and when the difference value is greater than or equal to the threshold, the captured image frame is set as the reference image and stored and the step of acquiring a captured image frame containing a facial image captured by the camera is performed again. This saves the storage resources of the terminal, improves the accuracy of capturing useful image frames during continuous shooting, and improves the user experience.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a continuous shooting method provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of another continuous shooting method provided by an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a continuous shooting device provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an image frame acquisition unit provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a facial feature detection unit provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In specific implementations, the terminals mentioned in the embodiments of the present invention include, but are not limited to, electronic devices such as smartphones (for example, Android or iOS phones), tablet computers, notebook computers, handheld computers, and digital cameras. The method described in the embodiments of the present invention is executed by a set of code stored in the memory of the terminal and running on a computer system based on the von Neumann architecture.
A continuous shooting method and device provided by the embodiments of the present invention are described in detail below with reference to FIG. 1 to FIG. 5.
Referring to FIG. 1, which is a schematic flowchart of a continuous shooting method provided by an embodiment of the present invention, the continuous shooting method may include the following steps.
S101: Receive a continuous shooting start instruction, acquire a starting image frame containing a facial image captured by the camera, and store the starting image frame as a reference image.
In a specific implementation, the camera is first started and the continuous shooting mode is selected in the terminal's settings menu; after the camera enters viewfinder mode, it performs autofocus. When the user is ready and inputs a photographing instruction, this is treated as the continuous shooting start instruction. The photographing instruction can be input in several ways: it may be triggered by the user pressing the shutter key, or, when the terminal is in delayed-shooting mode, triggered when the shooting delay expires. The main application scenario of the continuous shooting method in this embodiment is capturing burst images that contain a person's facial image; therefore, when the terminal receives the continuous shooting instruction, the camera captures a starting image frame containing a facial image, and this frame is stored as the reference image and serves as the first image taken in the burst.
S102: Acquire a captured image frame containing a facial image captured by the camera.
After the starting image frame captured in step S101 has been set as the reference image, the camera continues framing and autofocusing to capture image frames containing a facial image, ready to shoot the remaining images of the burst.
S103: Detect the difference value between the facial feature information in the captured image frame and the facial feature information of the reference image.
In a specific implementation, this step may be based on face recognition technology. Face detection is first performed on the captured image frame and the reference image (determining whether a facial image exists in a complex background and segmenting it out of the background) to obtain the facial images they contain; the facial images contained in the captured image frame and the reference image are then sampled to extract facial feature information, and the difference value between the facial feature information of the captured image frame and that of the reference image is detected. Face detection methods include, but are not limited to, the reference template method, the face rule method, the sample learning method, the skin color model method, and the eigenface method. Facial feature extraction typically uses the feature vector method or the face texture template method. In the feature vector method, attributes such as the size, position, and distance of the contours of the facial features (the iris, nose wings, mouth corners, and so on) are determined, and geometric feature quantities (Euclidean distance, curvature, angle) or algebraic feature quantities (matrix eigenvectors) of the feature points on these contours are then computed to describe the facial features of the face in the image frame.
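As an illustration of the face detection and landmark-based feature extraction described above, the following minimal Python sketch uses OpenCV's Haar cascade detector; the landmark step is left as a placeholder, since the patent does not mandate any particular detector or landmark model (the cascade file, the `get_landmarks` stub, and the chosen feature points are assumptions, not part of the disclosure).

```python
import cv2

# Haar cascade face detector shipped with OpenCV (one of many possible detectors;
# the patent also lists template, rule-based, learning and skin-color methods).
_face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_face(frame_bgr):
    """Return the bounding box (x, y, w, h) of the largest detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection as the subject's face.
    return max(faces, key=lambda box: box[2] * box[3])

def get_landmarks(frame_bgr, face_box):
    """Hypothetical landmark step: map the face region to named feature points.

    The patent only requires coordinates of points such as the mouth corners,
    nose wings and eye corners; any landmark model could fill this in. It would
    return something like {"mouth_corner_A": (x, y), "mouth_corner_B": (x, y)}.
    """
    raise NotImplementedError("plug in a landmark detector of your choice")

def extract_features(frame_bgr):
    """Detect the face, then extract the landmark coordinates used for comparison."""
    box = detect_face(frame_bgr)
    if box is None:
        return None
    return get_landmarks(frame_bgr, box)
```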
In this embodiment, the coordinates of facial feature points (such as the mouth corners, nose wings, eye corners, eye contours, nose tip, and the inner and outer ends of the eyebrows) can be obtained for the facial images in the captured image frame and in the reference image, and the coordinate differences between each facial feature point in the captured image frame and the corresponding feature point in the reference image are then detected to determine whether the face in the captured image frame has changed in expression or position. For example, a change in the distance between the inner ends of the eyebrows can indicate whether the face is frowning, and a change in the distance between the nose tip and an eye corner can indicate whether the nose is wrinkled.
For example, suppose the threshold for the difference in the horizontal and vertical coordinates of the mouth-corner feature points A and B is set to 0.5. After face detection and facial feature extraction are performed on the captured image frame and the reference image, the coordinates of mouth corners A and B in the captured image frame are (0.5, 1.5) and (2.5, 1.5), and the coordinates of A and B in the reference image are (1, 1) and (2, 1). The detected differences in the horizontal and vertical coordinates of points A and B between the captured image frame and the reference image are therefore 0.5, equal to the preset threshold, so step S104 is performed: the captured image frame is set as the reference image and stored. Here, the difference between the horizontal coordinates of mouth corners A and B represents the length of the mouth, and the difference between their vertical coordinates represents the height difference between the mouth corners. Compared with the mouth in the facial image of the reference image, the mouth in the captured image frame is longer and its corners are raised, so it can be determined that the mouth in the captured image frame has changed to a smiling state and that the facial image in the captured image frame shows an expression change relative to the reference image.
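A minimal sketch of the per-point comparison in this example follows; the point names and the 0.5 threshold come from the passage above, while the function and variable names are illustrative assumptions.

```python
THRESHOLD = 0.5  # per-coordinate difference threshold from the example above

def expression_changed(current, reference, threshold=THRESHOLD):
    """Return True if any tracked landmark moved by >= threshold in x or y."""
    for name, (x_cur, y_cur) in current.items():
        x_ref, y_ref = reference[name]
        if abs(x_cur - x_ref) >= threshold or abs(y_cur - y_ref) >= threshold:
            return True
    return False

# Worked example from the text: mouth corners A and B.
reference = {"mouth_corner_A": (1.0, 1.0), "mouth_corner_B": (2.0, 1.0)}
captured  = {"mouth_corner_A": (0.5, 1.5), "mouth_corner_B": (2.5, 1.5)}

print(expression_changed(captured, reference))  # True: every difference equals 0.5
# Mouth length (x_B - x_A) grows from 1.0 in the reference to 2.0 in the captured frame.
print(captured["mouth_corner_B"][0] - captured["mouth_corner_A"][0])
```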
S104: When the difference value is greater than or equal to the threshold, set the captured image frame as the reference image and store it, and return to the step of acquiring a captured image frame containing a facial image captured by the camera.
In this embodiment, after the captured image frame has been set as the reference image and stored, if the images required for this burst have not all been collected, steps S102 to S104 are executed in a loop until the burst is complete.
In another optional embodiment, after the step of setting the captured image frame as the reference image and storing it, the method further includes: determining whether the number of image frames stored as reference images is greater than or equal to a first threshold; if so, ending the continuous shooting; otherwise, performing the step of acquiring a captured image frame containing a facial image captured by the camera.
In a specific implementation, the number of images to be taken in the burst can be specified as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal typically provides 4-, 6-, 8-, or 16-shot burst options for the user to choose from, and the user may also customize the number of images to be taken after selecting the continuous shooting mode. Each time the terminal captures an image during the burst, it determines whether the number of images captured in this burst has reached the preset first threshold; if so, the burst ends; if not, steps S102 to S104 continue to be executed in a loop until the burst is complete.
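Combining steps S101 to S104 with the frame-count check just described, the control flow might look like the sketch below. The `capture_frame` and `extract_features` callables stand in for the camera pipeline and the feature extraction discussed earlier, and `expression_changed` is the comparison sketched above; all of them are assumptions for illustration, not part of the disclosure.

```python
def run_burst(capture_frame, extract_features, expression_changed, first_threshold=8):
    """Sketch of S101-S104 with the optional frame-count stop condition.

    capture_frame()        -> next camera frame containing a face (hypothetical)
    extract_features(f)    -> dict of landmark coordinates (hypothetical)
    expression_changed(..) -> comparison against the reference, as sketched above
    """
    reference_frame = capture_frame()                   # S101: starting frame
    stored = [reference_frame]                          # stored as the reference image
    reference_feats = extract_features(reference_frame)

    while len(stored) < first_threshold:                # optional first-threshold check
        frame = capture_frame()                         # S102
        feats = extract_features(frame)                 # S103
        if expression_changed(feats, reference_feats):  # difference >= threshold
            stored.append(frame)                        # S104: becomes the new reference
            reference_feats = feats
        # otherwise: discard the transitional frame and keep sampling

    return stored                                       # e.g. frames for a GIF
```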
In yet another optional embodiment, after the step of receiving the continuous shooting start instruction, the method further includes: obtaining the start timestamp of the continuous shooting start instruction.
Correspondingly, the step of acquiring a captured image frame containing a facial image captured by the camera further includes: obtaining the current timestamp, calculating the continuous shooting duration from the current timestamp and the start timestamp, and determining whether the continuous shooting duration is greater than or equal to a second threshold; if so, ending the continuous shooting; otherwise, performing the step of acquiring a captured image frame containing a facial image captured by the camera.
In a specific implementation, the continuous shooting duration required for the burst can be set in advance as the second threshold when the continuous shooting mode is selected, and timing starts once the terminal receives the continuous shooting start instruction. Each time the terminal captures an image during the burst, it determines whether the burst duration has reached the preset second threshold; if so, the burst ends; if not, steps S102 to S104 continue to be executed in a loop until the burst duration reaches the preset threshold and the burst is complete.
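A short sketch of the timestamp-based variant follows; using `time.monotonic()` for the start and current timestamps is just one possible choice and is not required by the patent.

```python
import time

def burst_expired(start_ts, second_threshold_s):
    """Return True once the burst duration reaches the second threshold (in seconds)."""
    return time.monotonic() - start_ts >= second_threshold_s

# Usage inside the loop sketched above:
#   start_ts = time.monotonic()        # taken when the start instruction arrives
#   while not burst_expired(start_ts, second_threshold_s=5.0):
#       ...capture, compare, store...
```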
In yet another optional embodiment, after the step of setting the captured image frame as the reference image and storing it, the method further includes: receiving an animation generation instruction, obtaining the image frames that were stored as reference images, and generating an animated image from those stored image frames.
In a specific implementation, after the burst ends, the terminal can turn the images captured in the burst into a GIF animation, either directly or according to the user's needs, which makes the continuous shooting mode more entertaining.
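For instance, the stored burst frames could be assembled into a GIF with Pillow, as in the sketch below; the file names, the frame duration, and the use of Pillow itself are illustrative choices, not part of the patent.

```python
from PIL import Image

def frames_to_gif(frame_paths, out_path="burst.gif", frame_ms=200):
    """Assemble the stored burst frames into a looping GIF animation."""
    frames = [Image.open(p) for p in frame_paths]
    frames[0].save(
        out_path,
        save_all=True,              # write a multi-frame file
        append_images=frames[1:],   # remaining burst frames
        duration=frame_ms,          # display time per frame, in milliseconds
        loop=0,                     # loop forever
    )

# frames_to_gif(["shot_0.jpg", "shot_1.jpg", "shot_2.jpg"])
```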
In yet another optional embodiment, step S102 may include the following steps:
11) acquiring a plurality of buffered image frames containing a facial image captured by the camera;
12) calculating a focus evaluation value for each of the plurality of buffered image frames; and
13) selecting the buffered image frame with the largest focus evaluation value among the plurality of buffered image frames as the captured image frame.
After the terminal obtains the first burst image, it captures a plurality of buffered image frames containing a facial image through the camera and, at the same time, calculates the focus evaluation value of the focus area in each frame, where the focus evaluation value is a sharpness reference value (such as a sharpness measure) for the focus area. The buffered image frame with the largest focus evaluation value is then selected as the captured image frame. This yields a relatively sharp captured image frame, increases the clarity of the burst images, and provides a degree of anti-shake effect.
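The patent leaves the focus evaluation value open (any "sharpness reference value"); one common stand-in is the variance of the Laplacian, as in the following sketch. Both the choice of metric and the use of a centered crop as the "focus area" are assumptions made here for illustration.

```python
import cv2

def focus_value(frame_bgr):
    """Sharpness proxy for the focus area: variance of the Laplacian.

    The focus area is approximated by a centered crop; a real camera pipeline
    would use the autofocus region actually reported by the sensor.
    """
    h, w = frame_bgr.shape[:2]
    crop = frame_bgr[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_sharpest(buffered_frames):
    """Select the buffered frame with the largest focus evaluation value (step 13)."""
    return max(buffered_frames, key=focus_value)
```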
In the embodiments of the present invention, a continuous shooting start instruction can be received, a starting image frame containing a facial image captured by the camera can be acquired and stored as a reference image, and a captured image frame containing a facial image can then be acquired from the camera; the difference value between the facial feature information in the captured image frame and the facial feature information of the reference image is detected, and when the difference value is greater than or equal to the threshold, the captured image frame is set as the reference image and stored and the step of acquiring a captured image frame containing a facial image captured by the camera is performed again. This saves the storage resources of the terminal, improves the accuracy of capturing useful image frames during continuous shooting, and improves the user experience.
Referring to FIG. 2, which is a schematic flowchart of another continuous shooting method provided by an embodiment of the present invention, the continuous shooting method may include the following steps.
S201: Receive a continuous shooting start instruction, acquire a starting image frame containing a facial image captured by the camera, and store the starting image frame as a reference image.
S202: Acquire a captured image frame containing a facial image captured by the camera.
S203: Obtain the coordinate values of the facial features in the captured image frame and in the reference image, respectively.
In a specific implementation, this step may be based on face recognition technology. Face detection is first performed on the captured image frame and the reference image (determining whether a facial image exists in a complex background and segmenting it out of the background) to obtain the facial images they contain, and the facial images contained in the captured image frame and the reference image are then sampled to obtain the coordinate values of the facial feature points.
S204: Determine whether the difference between the coordinate values of the facial features in the reference image and the coordinate values of the facial features in the captured image frame is greater than or equal to a threshold.
If the difference between the coordinate values of the facial features in the reference image and those in the captured image frame is greater than or equal to the threshold, step S205 is performed. For example, suppose the threshold for the difference in the horizontal and vertical coordinates of the mouth-corner feature points A and B is set to 0.5. After face detection and facial feature extraction are performed on the captured image frame and the reference image, the coordinates of mouth corners A and B in the captured image frame are (0.5, 1.5) and (2.5, 1.5), and the coordinates of A and B in the reference image are (1, 1) and (2, 1); the detected differences in the horizontal and vertical coordinates of points A and B between the captured image frame and the reference image are therefore 0.5, equal to the preset threshold, and step S205 is performed. Optionally, if the difference between the coordinate values of the facial features in the reference image and those in the captured image frame is smaller than the threshold, the method returns to step S202.
S205: Set the captured image frame as the reference image and store it.
S206: Determine whether the number of image frames stored as reference images is greater than or equal to a first threshold.
In a specific implementation, the number of images to be taken in the burst can be specified as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal typically provides 4-, 6-, 8-, or 16-shot burst options for the user to choose from, and the user may also customize the number of images to be taken after selecting the continuous shooting mode. Each time the terminal captures an image during the burst, it determines whether the number of images captured in this burst has reached the preset first threshold; if so, the burst ends; if not, steps S202 to S206 are executed again until the burst is complete.
S207: End the continuous shooting.
In this embodiment of the present invention, when a continuous shooting start instruction is received, a starting image frame containing a facial image captured by the camera can be acquired and stored as a reference image, and a captured image frame containing a facial image can then be acquired from the camera; the coordinate values of the facial features in the captured image frame and in the reference image are obtained, and when it is determined that the difference between the coordinate values of the facial features in the reference image and those in the captured image frame is greater than or equal to the threshold, the captured image frame is set as the reference image and stored; it is then determined whether the number of image frames stored as reference images is greater than or equal to the first threshold, and if so the burst ends. This saves the storage resources of the terminal, improves the accuracy of capturing useful image frames during continuous shooting, and improves the user experience.
Referring to FIG. 3, which is a schematic structural diagram of a continuous shooting device provided by an embodiment of the present invention, the continuous shooting device may include at least an image frame acquisition unit 301, a reference image setting unit 302, and a facial feature detection unit 303.
The image frame acquisition unit 301 is configured to receive a continuous shooting start instruction and acquire a starting image frame containing a facial image captured by the camera.
In a specific implementation, the camera is first started and the continuous shooting mode is selected in the terminal's settings menu; after the camera enters viewfinder mode, it performs autofocus. When the user is ready and inputs a photographing instruction, this is treated as the continuous shooting start instruction. The photographing instruction can be input in several ways: it may be triggered by the user pressing the shutter key, or, when the terminal is in delayed-shooting mode, triggered when the terminal's shooting delay expires. The main application scenario of the continuous shooting method in this embodiment is capturing burst images that contain a person's facial image; therefore, when the image frame acquisition unit 301 receives the continuous shooting instruction, it captures a starting image frame containing a facial image through the camera, and the reference image setting unit 302 stores the starting image frame as the reference image, which serves as the first image taken in the burst.
The reference image setting unit 302 is configured to store the starting image frame as the reference image.
The image frame acquisition unit 301 is further configured to acquire a captured image frame containing a facial image captured by the camera.
After the reference image setting unit 302 sets the starting image frame as the reference image, the camera continues framing and autofocusing to capture image frames containing a facial image, ready to shoot the remaining images of the burst.
The facial feature detection unit 303 is configured to detect the difference value between the facial feature information in the captured image frame and the facial feature information of the reference image.
In a specific implementation, the facial feature detection unit 303 may be based on face recognition technology. It first performs face detection on the captured image frame and the reference image (determining whether a facial image exists in a complex background and segmenting it out of the background) to obtain the facial images they contain, then samples the facial images contained in the captured image frame and the reference image to extract facial feature information, and finally detects the difference value between the facial feature information in the captured image frame and that of the reference image. Face detection methods include, but are not limited to, the reference template method, the face rule method, the sample learning method, the skin color model method, and the eigenface method. Facial feature extraction typically uses the feature vector method or the face texture template method. In the feature vector method, attributes such as the size, position, and distance of the contours of the facial features (the iris, nose wings, mouth corners, and so on) are determined, and geometric feature quantities (Euclidean distance, curvature, angle) or algebraic feature quantities (matrix eigenvectors) of the feature points on these contours are then computed to describe the facial features of the face in the image frame.
In this embodiment, the coordinates of facial feature points (such as the mouth corners, nose wings, eye corners, eye contours, nose tip, and the inner and outer ends of the eyebrows) can be obtained for the facial images in the captured image frame and in the reference image, and the coordinate differences between each facial feature point in the captured image frame and the corresponding feature point in the reference image are then detected to determine whether the face in the captured image frame has changed in expression or position. For example, a change in the distance between the inner ends of the eyebrows can indicate whether the face is frowning, and a change in the distance between the nose tip and an eye corner can indicate whether the nose is wrinkled.
For example, suppose the threshold for the difference in the horizontal and vertical coordinates of the mouth-corner feature points A and B is set to 0.5. After face detection and facial feature extraction are performed on the captured image frame and the reference image, the coordinates of mouth corners A and B in the captured image frame are (0.5, 1.5) and (2.5, 1.5), and the coordinates of A and B in the reference image are (1, 1) and (2, 1). The detected differences in the horizontal and vertical coordinates of points A and B between the captured image frame and the reference image are therefore 0.5, equal to the preset threshold, and the reference image setting unit 302 sets the captured image frame as the reference image and stores it. Here, the difference between the horizontal coordinates of mouth corners A and B represents the length of the mouth, and the difference between their vertical coordinates represents the height difference between the mouth corners. Compared with the mouth in the facial image of the reference image, the mouth in the captured image frame is longer and its corners are raised, so it can be determined that the mouth in the captured image frame has changed to a smiling state and that the facial image in the captured image frame shows an expression change relative to the reference image.
The reference image setting unit 302 is further configured to, when the difference value is greater than or equal to the threshold, set the captured image frame as the reference image and store it.
In this embodiment, after the reference image setting unit 302 sets the captured image frame as the reference image and stores it, if the images required for this burst have not all been collected, the image frame acquisition unit 301, the reference image setting unit 302, and the facial feature detection unit 303 continue collecting burst images until the burst ends.
In another optional embodiment, the continuous shooting device further includes an image quantity determination unit 304, configured to determine, after the reference image setting unit sets the captured image frame as the reference image and stores it, whether the number of image frames stored as reference images is greater than or equal to the first threshold; if so, the continuous shooting ends; otherwise, the image frame acquisition unit 301 acquires a captured image frame containing a facial image captured by the camera.
In a specific implementation, the number of images to be taken in the burst can be specified as the first threshold when the camera is started and the continuous shooting mode is selected. The terminal typically provides 4-, 6-, 8-, or 16-shot burst options for the user to choose from, and the user may also customize the number of images to be taken after selecting the continuous shooting mode. Each time an image is captured during the burst, the image quantity determination unit 304 determines whether the number of images captured in this burst has reached the preset first threshold; if so, the burst ends; if not, the image frame acquisition unit 301, the reference image setting unit 302, and the facial feature detection unit 303 continue shooting burst images until the burst is complete.
In yet another optional embodiment, the continuous shooting device further includes a timestamp acquisition unit 305, configured to obtain the start timestamp of the continuous shooting start instruction after the image frame acquisition unit receives the continuous shooting start instruction.
The timestamp acquisition unit 305 is further configured to obtain the current timestamp.
Correspondingly, the continuous shooting device further includes a continuous shooting duration determination unit 306, configured to calculate the continuous shooting duration from the current timestamp and the start timestamp and to determine whether the continuous shooting duration is greater than or equal to the second threshold; if so, the continuous shooting ends; otherwise, the image frame acquisition unit 301 acquires a captured image frame containing a facial image captured by the camera.
In a specific implementation, the continuous shooting duration required for the burst can be set in advance as the second threshold when the continuous shooting mode is selected. After the terminal receives the continuous shooting start instruction, the timestamp acquisition unit 305 obtains the burst start time and timing begins. Each time an image is captured during the burst, the timestamp acquisition unit 305 obtains the current time to calculate the burst duration, and the continuous shooting duration determination unit 306 determines whether the burst duration has reached the preset second threshold; if so, the burst ends; if not, the image frame acquisition unit 301, the reference image setting unit 302, and the facial feature detection unit 303 continue shooting burst images until the burst duration reaches the preset threshold and the burst is complete.
In yet another optional embodiment, the continuous shooting device further includes an animation synthesis unit 307, configured to, after the reference image setting unit sets the captured image frame as the reference image and stores it, receive an animation generation instruction, obtain the image frames stored as reference images, and generate an animated image from those stored image frames.
After the burst ends, the animation synthesis unit 307 can turn the images captured in the burst into a GIF animation, either directly or according to the user's needs, which makes the continuous shooting mode more entertaining.
Referring to FIG. 4, which is a schematic structural diagram of the image frame acquisition unit provided by an embodiment of the present invention, the image frame acquisition unit 301 may include a buffered image frame acquisition subunit 3101, a focus evaluation value calculation subunit 3102, and a captured image frame selection subunit 3103.
The buffered image frame acquisition subunit 3101 is configured to acquire a plurality of buffered image frames containing a facial image captured by the camera.
The focus evaluation value calculation subunit 3102 is configured to calculate a focus evaluation value for each of the plurality of buffered image frames.
The captured image frame selection subunit 3103 is configured to select the buffered image frame with the largest focus evaluation value among the plurality of buffered image frames as the captured image frame.
After the first burst image is obtained, the buffered image frame acquisition subunit 3101 captures a plurality of buffered image frames containing a facial image through the camera, and the focus evaluation value calculation subunit 3102 calculates the focus evaluation value of the focus area in each buffered image frame, where the focus evaluation value is a sharpness reference value (such as a sharpness measure) for the focus area. The captured image frame selection subunit 3103 then selects the buffered image frame with the largest focus evaluation value as the captured image frame. This yields a relatively sharp captured image frame, increases the clarity of the burst images, and provides a degree of anti-shake effect.
Referring to FIG. 5, which is a schematic structural diagram of the facial feature detection unit provided by an embodiment of the present invention, the facial feature detection unit 303 may include a facial feature acquisition subunit 3301 and a facial feature determination subunit 3302.
The facial feature acquisition subunit 3301 is configured to obtain the coordinate values of the facial features in the captured image frame.
In a specific implementation, the facial feature acquisition subunit 3301 may be based on face recognition technology. It first performs face detection on the captured image frame (determining whether a facial image exists in a complex background and segmenting it out of the background) to obtain the facial image it contains, and then samples the facial images contained in the captured image frame and the reference image to obtain the coordinate values of the facial feature points.
The facial feature acquisition subunit 3301 is further configured to obtain the coordinate values of the facial features in the reference image.
The facial feature determination subunit 3302 is configured to determine whether the difference between the coordinate values of the facial features in the reference image and the coordinate values of the facial features in the captured image frame is greater than the threshold; if so, the reference image setting unit 302 sets the captured image frame as the reference image and stores it; if not, the image frame acquisition unit 301 acquires a captured image frame containing a facial image captured by the camera.
For example, suppose the threshold for the difference in the horizontal and vertical coordinates of the mouth-corner feature points A and B is set to 0.5. The facial feature acquisition subunit 3301 obtains the coordinates of mouth corners A and B in the captured image frame as (0.5, 1.5) and (2.5, 1.5), and the coordinates of A and B in the reference image as (1, 1) and (2, 1). The facial feature determination subunit 3302 can therefore detect that the differences in the horizontal and vertical coordinates of points A and B between the captured image frame and the reference image are 0.5, equal to the preset threshold, and the reference image setting unit 302 sets the captured image frame as the reference image and stores it.
The continuous shooting method and device disclosed in the embodiments of the present invention have been described in detail above. What is disclosed above is merely a preferred embodiment of the present invention and certainly cannot be used to limit the scope of the rights of the present invention; equivalent changes made according to the claims of the present invention therefore still fall within the scope covered by the present invention.
Claims (12)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510058180.3A CN104683692B (en) | 2015-02-04 | 2015-02-04 | A kind of continuous shooting method and device |
CN201710877073.2A CN107483834B (en) | 2015-02-04 | 2015-02-04 | Image processing method, continuous shooting method and device and related medium product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510058180.3A CN104683692B (en) | 2015-02-04 | 2015-02-04 | A kind of continuous shooting method and device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710877073.2A Division CN107483834B (en) | 2015-02-04 | 2015-02-04 | Image processing method, continuous shooting method and device and related medium product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104683692A true CN104683692A (en) | 2015-06-03 |
CN104683692B CN104683692B (en) | 2017-10-17 |
Family ID: 53318197
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510058180.3A Expired - Fee Related CN104683692B (en) | 2015-02-04 | 2015-02-04 | A kind of continuous shooting method and device |
CN201710877073.2A Expired - Fee Related CN107483834B (en) | 2015-02-04 | 2015-02-04 | Image processing method, continuous shooting method and device and related medium product |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710877073.2A Expired - Fee Related CN107483834B (en) | 2015-02-04 | 2015-02-04 | Image processing method, continuous shooting method and device and related medium product |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN104683692B (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105323483A (en) * | 2015-10-27 | 2016-02-10 | 广东欧珀移动通信有限公司 | GIF picture photographing and playing method and GIF picture photographing and playing system |
CN106303235A (en) * | 2016-08-11 | 2017-01-04 | 广东小天才科技有限公司 | Photographing processing method and device |
CN106303234A (en) * | 2016-08-11 | 2017-01-04 | 广东小天才科技有限公司 | photographing processing method and device |
WO2017084220A1 (en) * | 2015-11-20 | 2017-05-26 | 宇龙计算机通信科技(深圳)有限公司 | Photography processing method and apparatus, and terminal |
CN107240143A (en) * | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Bag generation method of expressing one's feelings and device |
CN107368777A (en) * | 2017-06-02 | 2017-11-21 | 广州视源电子科技股份有限公司 | Smile action detection method and device and living body identification method and system |
CN108401110A (en) * | 2018-03-18 | 2018-08-14 | 广东欧珀移动通信有限公司 | Image acquisition method and device, storage medium and electronic equipment |
CN110025116A (en) * | 2018-01-11 | 2019-07-19 | 卡西欧计算机株式会社 | Device for informing, report method and recording medium |
CN110769150A (en) * | 2019-09-23 | 2020-02-07 | 珠海格力电器股份有限公司 | Photographing method, device, terminal and computer readable medium |
CN112425156A (en) * | 2019-01-31 | 2021-02-26 | 华为技术有限公司 | Method for selecting images based on continuous shooting and electronic equipment |
CN113239220A (en) * | 2021-05-26 | 2021-08-10 | Oppo广东移动通信有限公司 | Image recommendation method and device, terminal and readable storage medium |
CN114245017A (en) * | 2021-12-21 | 2022-03-25 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
TWI777126B (en) * | 2020-01-22 | 2022-09-11 | 中國醫藥大學 | Method of facial characteristic angle measurement and device thereof |
CN119396319A (en) * | 2024-12-31 | 2025-02-07 | 深圳市云希谷科技有限公司 | Image recognition-based platform interaction interface self-adaptive adjustment method and system |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389086B (en) * | 2018-10-09 | 2021-03-05 | 北京科技大学 | Method and system for detecting unmanned aerial vehicle image target |
CN109447006A (en) * | 2018-11-01 | 2019-03-08 | 北京旷视科技有限公司 | Image processing method, device, equipment and storage medium |
CN109659006B (en) * | 2018-12-10 | 2021-03-23 | 深圳先进技术研究院 | Facial muscle training method and device and electronic equipment |
CN111669504B (en) * | 2020-06-29 | 2021-11-05 | 维沃移动通信有限公司 | Image shooting method and device and electronic equipment |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101398475B1 (en) * | 2007-11-21 | 2014-05-26 | 삼성전자주식회사 | Apparatus for processing digital image and method for controlling thereof |
CN103903213B (en) * | 2012-12-24 | 2018-04-27 | 联想(北京)有限公司 | Image pickup method and electronic equipment |
CN103152524B (en) * | 2013-03-05 | 2016-04-06 | 东莞宇龙通信科技有限公司 | Camera arrangement and continuous shooting method thereof |
CN103491299B (en) * | 2013-09-17 | 2016-10-05 | 宇龙计算机通信科技(深圳)有限公司 | Photographing processing method and apparatus |
WO2015058381A1 (en) * | 2013-10-23 | 2015-04-30 | 华为终端有限公司 | Method and terminal for selecting image from continuous images |
- 2015-02-04 CN CN201510058180.3A patent/CN104683692B/en not_active Expired - Fee Related
- 2015-02-04 CN CN201710877073.2A patent/CN107483834B/en not_active Expired - Fee Related
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005275916A (en) * | 2004-03-25 | 2005-10-06 | Toyota Motor Corp | Image recognition apparatus and image recognition method |
JP2005323227A (en) * | 2004-05-11 | 2005-11-17 | Fuji Photo Film Co Ltd | Device, method, and program for picking up image |
FR2875322A1 (en) * | 2004-09-14 | 2006-03-17 | Atmel Grenoble Soc Par Actions | Person`s face locating method for e.g. identifying person, involves detecting eyes in digital image produced by performing point to point subtraction between reference image taken before or after eye blink and image taken during blink |
CN1731859A (en) * | 2005-09-09 | 2006-02-08 | 北京中星微电子有限公司 | Video compression method and video system using the method |
US20130033568A1 (en) * | 2007-08-29 | 2013-02-07 | Samsung Electronics Co., Ltd. | Method for photographing panoramic picture |
JP2009081636A (en) * | 2007-09-26 | 2009-04-16 | Casio Comput Co Ltd | Image recording apparatus and photographing method |
CN101290539A (en) * | 2008-06-12 | 2008-10-22 | 北京中星微电子有限公司 | Electronic equipment usage situation judgement method and system |
US20100104266A1 (en) * | 2008-10-29 | 2010-04-29 | Canon Kabushiki Kaisha | Information processing apparatus and method of controlling same |
WO2010070820A1 (en) * | 2008-12-17 | 2010-06-24 | パナソニック株式会社 | Image communication device and image communication method |
DE102009049528A1 (en) * | 2009-10-15 | 2011-04-21 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | A method and apparatus for detecting facial movement in a face in an image sequence |
CN102208107A (en) * | 2010-03-30 | 2011-10-05 | 索尼公司 | Image processing device, image processing method, and program |
CN103020580A (en) * | 2011-09-23 | 2013-04-03 | 无锡中星微电子有限公司 | Rapid human face detection method |
CN103856617A (en) * | 2012-12-03 | 2014-06-11 | 联想(北京)有限公司 | Photographing method and user terminal |
US20150016747A1 (en) * | 2013-07-12 | 2015-01-15 | Vivotek Inc. | Image processor and image combination method thereof |
CN103685948A (en) * | 2013-12-04 | 2014-03-26 | 乐视致新电子科技(天津)有限公司 | A shooting method and device |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105323483A (en) * | 2015-10-27 | 2016-02-10 | 广东欧珀移动通信有限公司 | GIF picture photographing and playing method and GIF picture photographing and playing system |
CN105323483B (en) * | 2015-10-27 | 2018-06-29 | 广东欧珀移动通信有限公司 | GIF picture shooting and playback method and system |
WO2017084220A1 (en) * | 2015-11-20 | 2017-05-26 | 宇龙计算机通信科技(深圳)有限公司 | Photography processing method and apparatus, and terminal |
CN106303235A (en) * | 2016-08-11 | 2017-01-04 | 广东小天才科技有限公司 | Photographing processing method and device |
CN106303234A (en) * | 2016-08-11 | 2017-01-04 | 广东小天才科技有限公司 | Photographing processing method and device |
CN107240143A (en) * | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Emoticon package generation method and device |
CN107368777A (en) * | 2017-06-02 | 2017-11-21 | 广州视源电子科技股份有限公司 | Smile action detection method and device and living body identification method and system |
CN110025116A (en) * | 2018-01-11 | 2019-07-19 | 卡西欧计算机株式会社 | Notification device, notification method and recording medium |
CN108401110A (en) * | 2018-03-18 | 2018-08-14 | 广东欧珀移动通信有限公司 | Image acquisition method and device, storage medium and electronic equipment |
CN108401110B (en) * | 2018-03-18 | 2020-09-08 | Oppo广东移动通信有限公司 | Image acquisition method and device, storage medium and electronic equipment |
CN112425156A (en) * | 2019-01-31 | 2021-02-26 | 华为技术有限公司 | Method for selecting images based on continuous shooting and electronic equipment |
CN112425156B (en) * | 2019-01-31 | 2022-03-11 | 华为技术有限公司 | Method for selecting images based on continuous shooting and electronic equipment |
US12003850B2 (en) | 2019-01-31 | 2024-06-04 | Huawei Technologies Co., Ltd. | Method for selecting image based on burst shooting and electronic device |
CN110769150A (en) * | 2019-09-23 | 2020-02-07 | 珠海格力电器股份有限公司 | Photographing method, device, terminal and computer readable medium |
TWI777126B (en) * | 2020-01-22 | 2022-09-11 | 中國醫藥大學 | Method of facial characteristic angle measurement and device thereof |
CN113239220A (en) * | 2021-05-26 | 2021-08-10 | Oppo广东移动通信有限公司 | Image recommendation method and device, terminal and readable storage medium |
CN114245017A (en) * | 2021-12-21 | 2022-03-25 | 维沃移动通信有限公司 | Shooting method and device and electronic equipment |
CN119396319A (en) * | 2024-12-31 | 2025-02-07 | 深圳市云希谷科技有限公司 | Image recognition-based platform interaction interface self-adaptive adjustment method and system |
Also Published As
Publication number | Publication date |
---|---|
CN107483834A (en) | 2017-12-15 |
CN104683692B (en) | 2017-10-17 |
CN107483834B (en) | 2020-01-14 |
Similar Documents
Publication | Title |
---|---|
CN104683692B (en) | Continuous shooting method and device | |
CN110532984B (en) | Key point detection method, gesture recognition method, device and system | |
CN108200334B (en) | Image capturing method, device, storage medium and electronic device | |
CN103237172B (en) | Time-lapse shooting method and device | |
WO2017045258A1 (en) | Photographing prompting method, device and apparatus, and nonvolatile computer storage medium | |
TWI678099B (en) | Video processing method and device and storage medium | |
CN111417983A (en) | Deformable object tracking based on event camera | |
KR101944112B1 (en) | Method and apparatus for creating user-created sticker, system for sharing user-created sticker | |
CN104935812B (en) | Method and device for controlling activation of a self-timer mode | |
CN110020578A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN109977739A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN109951636A (en) | Photographing processing method and device, mobile terminal and storage medium | |
CN103685940A (en) | Method for recognizing shot photos by facial expressions | |
CN108229369A (en) | Image capturing method, device, storage medium and electronic equipment | |
CN109348135A (en) | Photographing method and device, storage medium and terminal equipment | |
CN109218614B (en) | Automatic photographing method of mobile terminal and mobile terminal | |
CN104219444B (en) | Video capture processing method and device | |
CN106713768A (en) | Human scene image synthesis method, system and computer equipment | |
CN104519263A (en) | Method for acquiring image and electronic device | |
CN110717452B (en) | Image recognition method, device, terminal and computer readable storage medium | |
CN106503658A (en) | Automatic photographing method and mobile terminal | |
CN106331497B (en) | Image processing method and terminal | |
CN105049695A (en) | Video recording method and device | |
CN105007418A (en) | Photographing method and mobile terminal | |
WO2019213818A1 (en) | Photographing control method, and electronic device |
Legal Events
Code | Title | Description |
---|---|---|
C06 | Publication | |
PB01 | Publication | |
C10 | Entry into substantive examination | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP01 | Change in the name or title of a patent holder | Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province. Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong Province. Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20171017 |