CN115802144B - Video shooting method and related equipment
- Publication number: CN115802144B (application number CN202310006974.XA)
- Authority: CN (China)
- Prior art keywords: video, video frame, frame data, mode, human face
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Classifications
- H - Electricity; H04 - Electric communication technique; H04N - Pictorial communication, e.g. television
- H04N23/00 - Cameras or camera modules comprising electronic image sensors; control thereof
- H04N23/60 - Control of cameras or camera modules
- H04N23/61 - Control of cameras or camera modules based on recognised objects
- H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/667 - Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
Landscapes: Engineering & Computer Science, Multimedia, Signal Processing, Studio Devices, Image Processing
Description
Technical Field
The present application relates to the technical field of smart terminals, and in particular to a video shooting method and related equipment.
Background
With the development of terminal technology, users place increasingly high demands on the video shooting functions of electronic devices. Currently, an electronic device can provide multiple video modes for different shooting scenes, from which the user chooses one for video shooting. In actual use, however, the user may be unsure of the current shooting scene and not know which video mode to select, or, even when the shooting scene is clear, may not know how to choose the corresponding video mode. For example, when the shot contains a person, the user may select the portrait mode so that the electronic device can optimize the portrait effect in the video. However, when the person is far from the electronic device, the proportion of the face in the shooting interface may be small; even if the portrait is processed, the optimization effect may be poor, resulting in an unsatisfactory portrait video and a degraded user experience.
Summary of the Invention
In view of the above, it is necessary to provide a video shooting method and related equipment to solve the technical problem that, when a portrait video is shot, the video mode selected by the user does not match the portrait, resulting in a poor shooting effect.
In a first aspect, the present application provides a video shooting method, the method comprising: acquiring video frame data captured by a camera and identifying whether the video frame data contains a human face; if the video frame data contains a human face, calculating the proportion of the size of the face region in the video frame data; judging whether the proportion of the size of the face region in the video frame data is greater than or equal to a first preset value; if the proportion is greater than or equal to the first preset value, determining that the recommended video mode is the portrait mode and shooting video based on the portrait mode; if the proportion is smaller than the first preset value, judging whether the proportion is less than or equal to a second preset value, the second preset value being smaller than the first preset value; and if the proportion is less than or equal to the second preset value, determining that the recommended video mode is the protagonist mode and shooting video based on the protagonist mode. Through the above technical solution, when a portrait video is shot, a video mode can be recommended based on the face proportion: when the face proportion is large, the portrait mode is recommended, which effectively optimizes the display effect of the portrait video; when the face proportion is small, the protagonist mode is recommended, which effectively optimizes the display effect of the protagonist's portrait.
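The threshold decision described in this first aspect can be summarized as a small piece of logic. The following is a minimal Python sketch of that recommendation rule; the function name, the return labels, and the use of 1/3 and 1/5 as the two preset values are illustrative assumptions drawn from the example values given later in the description, not fixed by the claims.

```python
# Minimal sketch of the mode-recommendation rule described in the first aspect.
FIRST_PRESET = 1 / 3   # assumed first preset value
SECOND_PRESET = 1 / 5  # assumed second preset value

def recommend_video_mode(contains_face: bool, face_ratio: float):
    """Return a recommended mode label for the analysed video frame data, or None."""
    if not contains_face:
        return None                      # no face: keep analysing, no recommendation
    if face_ratio >= FIRST_PRESET:
        return "portrait_mode"           # large face proportion: recommend portrait mode
    if face_ratio <= SECOND_PRESET:
        return "protagonist_mode"        # small face proportion: recommend protagonist mode
    return None                          # in-between: no recommendation is made
```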
In a possible implementation manner, calculating the proportion of the size of the face region in the video frame data includes: marking the face region identified in the video frame data with a rectangular frame; determining the size of the face region based on the rectangular frame; and calculating the ratio between the size of the face region and the size of the video frame image to obtain the proportion of the size of the face region in the video frame data. Through the above technical solution, the size proportion of the face region can be accurately calculated based on the rectangular frame that marks the face region.
In a possible implementation manner, determining the size of the face region based on the rectangular frame includes: determining the width of the rectangular frame that marks the face region in the video frame image as the width of the face region, and determining the height of the rectangular frame as the height of the face region. Through the above technical solution, the width and height of the face region can be accurately determined based on the size of the rectangular frame that marks the face region.
In a possible implementation manner, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the width of the face region and the width of the video frame image. Through the above technical solution, the size proportion of the face region can be calculated quickly and accurately.
In a possible implementation manner, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the height of the face region and the height of the video frame image. Through the above technical solution, the size proportion of the face region can be calculated quickly and accurately.
In a possible implementation manner, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the area of the face region and the area of the video frame image. Through the above technical solution, the size proportion of the face region can be calculated quickly and accurately.
In a possible implementation manner, identifying whether the video frame data contains a human face includes: performing format conversion on each video frame image in the video frame data to obtain a video stream; performing face recognition on each video frame image in the video stream to judge whether the video frame data contains a human face; and, if a preset number of consecutive video frame images are recognized as containing a human face, determining that the video frame data contains a human face. Through the above technical solution, whether the video frame data contains a human face can be accurately identified.
In a possible implementation manner, shooting video based on the portrait mode includes: performing blur processing on the video frame data captured by the camera. Through the above technical solution, when the size proportion of the face region is large, the portrait can be highlighted and its display effect optimized.
In a possible implementation manner, performing blur processing on the video frame data captured by the camera includes: performing portrait matting on a video frame image to extract the portrait region in the video frame image; performing blur processing on the background region of the video frame image; and fusing the extracted portrait region with the blurred background region. Through the above technical solution, background blurring can be applied precisely to the portrait video, thereby highlighting the portrait and optimizing its display effect.
In a possible implementation manner, performing blur processing on the background region of the video frame image includes: performing Gaussian blur on the background region to obtain the blurred background region. Through the above technical solution, the efficiency of background blurring can be improved.
In a possible implementation manner, shooting video based on the protagonist mode includes: shooting a panoramic video and a protagonist portrait video; and displaying the video frame data of the protagonist portrait video in picture-in-picture form within the video frame data of the panoramic video. Through the above technical solution, when the size proportion of the face region in the portrait video is small, the portrait video can be enlarged for display, effectively optimizing its display effect.
In a second aspect, the present application provides an electronic device comprising a memory and a processor, wherein the memory is configured to store program instructions, and the processor is configured to read and execute the program instructions stored in the memory; when the program instructions are executed by the processor, the electronic device is caused to perform the above video shooting method.
In a third aspect, the present application provides a chip coupled to a memory in an electronic device, the chip being configured to control the electronic device to perform the above video shooting method.
In a fourth aspect, the present application provides a computer storage medium storing program instructions which, when run on an electronic device, cause the electronic device to perform the above video shooting method.
In addition, for the technical effects of the second to fourth aspects, reference may be made to the descriptions of the corresponding methods in the method section above, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of an interface of a camera application of an electronic device provided by an embodiment of the present application.
FIG. 2 is a schematic diagram of another interface of the camera application of an electronic device provided by an embodiment of the present application.
FIG. 3 is a software architecture diagram of an electronic device provided by an embodiment of the present application.
FIG. 4 is a flowchart of a video shooting method provided by an embodiment of the present application.
FIG. 5 is a schematic architecture diagram of a video shooting system provided by an embodiment of the present application.
FIG. 6 is an effect diagram of recognizing a face image using a cascade classifier, provided by an embodiment of the present application.
FIG. 7 is a flowchart of calculating the proportion of the size of the face region in the video frame data, provided by an embodiment of the present application.
FIG. 8 is a flowchart of performing blur processing on video frame data captured by a camera, provided by an embodiment of the present application.
FIG. 9 is a schematic architecture diagram of a video shooting system provided by another embodiment of the present application.
FIG. 10 is a flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 11 is a flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 12 is a partial flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 13 is a partial flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 14 is a partial flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 15 is a partial flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 16 is a flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 17 is a schematic diagram of decision factors for intelligent scene detection provided by an embodiment of the present application.
FIG. 18 is a schematic diagram of video specifications of video modes provided by an embodiment of the present application.
FIG. 19 is a flowchart of a video shooting method provided by another embodiment of the present application.
FIG. 20 is a hardware architecture diagram of an electronic device provided by an embodiment of the present application.
Detailed Description of Embodiments
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the embodiments of the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present application shall not be interpreted as more preferred or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present application. The terms used in the description of the present application are only for the purpose of describing specific embodiments and are not intended to limit the present application. It should be understood that, unless otherwise stated in the present application, "/" means "or"; for example, A/B may mean A or B. The term "and/or" in the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. "At least one" means one or more. "A plurality of" means two or more than two. For example, at least one of a, b, or c may represent the seven cases: a; b; c; a and b; a and c; b and c; a, b, and c.
The user interface (UI) in the embodiments of the present application is a medium interface for interaction and information exchange between an application or the operating system and a user, and realizes conversion between the internal form of information and a form acceptable to the user. The user interface of an application is source code written in a specific computer language such as JAVA or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize, such as pictures, text, buttons, and other controls. A control is a basic element of the user interface; typical controls include buttons, widgets, toolbars, menu bars, text boxes, scroll bars, images, and text. The properties and content of the controls in an interface are defined by tags or nodes; for example, XML specifies the controls contained in an interface through nodes such as <Textview>, <ImgView>, and <VideoView>. A node corresponds to a control or property in the interface, and after being parsed and rendered, the node is presented as content visible to the user. In addition, the interfaces of many applications, such as hybrid applications, usually also contain web pages. A web page, also called a page, can be understood as a special control embedded in an application interface; a web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS), and web page source code can be loaded and displayed as user-recognizable content by a browser or by a web page display component with functions similar to a browser. The specific content contained in a web page is also defined by tags or nodes in the web page source code; for example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
A commonly used form of the user interface is the graphical user interface (GUI), which refers to a user interface related to computer operations that is displayed graphically. It may be an icon, a window, a control, or another interface element displayed on the display screen of the electronic device.
Provided that no conflict arises, the following embodiments and the features in the embodiments may be combined with each other.
With the development of terminal technology, users place increasingly high demands on the video shooting functions of electronic devices. Currently, an electronic device can provide multiple video modes for different shooting scenes, from which the user chooses one for video shooting. In actual use, however, the user may be unsure of the current shooting scene and not know which video mode to select, or, even when the shooting scene is clear, may not know how to choose the corresponding video mode. For example, when the shot contains a person, the user may select the portrait mode so that the electronic device can process the portrait in the video. However, when the person is far from the electronic device, the proportion of the face in the shooting interface may be small; even if the portrait is processed, the optimization effect may be poor, resulting in an unsatisfactory portrait video and a degraded user experience.
To avoid a poor portrait-video shooting effect caused by a mismatch between the video mode selected by the user and the portrait when shooting a portrait video scene, an embodiment of the present application provides a video shooting method. When a portrait video is shot, a video mode can be recommended based on the face proportion, so that a portrait video with a better effect is generated automatically, adapting to the user's portrait-video shooting needs and effectively improving the user experience.
To better understand the video shooting method provided by the embodiments of the present application, the application scenarios of the video shooting method are described below with reference to FIG. 1 and FIG. 2.
Referring to FIG. 1, when the user opens the camera application of the electronic device and selects video recording, that is, shoots a video, the electronic device displays preview video frame data on the shooting interface of the camera application. When a person is present within the shooting range of the camera, the previewed video frame data also contains a portrait; the portrait in FIG. 1 occupies a relatively small proportion of the preview interface.
The electronic device provides a video mode selection control on the preview interface of the camera application. When the user triggers the video mode selection control, the electronic device provides controls corresponding to multiple video modes for the user to choose from; the user can trigger a control to select the corresponding video mode, so that the preview video frame data and the captured video are optimized accordingly.
Referring to FIG. 2, when the user triggers the control corresponding to the portrait mode, the electronic device automatically processes the portrait in the preview video frame data, for example by applying beautification or background blurring. However, because the portrait occupies a small proportion of the preview interface, the beautification effect is hard to see, and background blurring cannot effectively highlight the portrait subject, resulting in a poor portrait-video shooting effect and a degraded user experience.
Referring to FIG. 3, which is a software architecture diagram of the electronic device provided by an embodiment of the present application, the layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. For example, the Android system is divided, from top to bottom, into an application layer 101, a framework layer 102, an Android runtime and system library 103, a hardware abstraction layer 104, a kernel layer 105, and a hardware layer 106.
The application layer may include a series of application packages. For example, the application packages may include applications such as Camera, Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, Messaging, and device control services.
The framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions. For example, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and the like.
The window manager is used to manage window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on. Content providers are used to store and retrieve data and make the data accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and the like. The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may consist of one or more views; for example, a display interface including a short-message notification icon may include a view for displaying text and a view for displaying pictures. The telephony manager is used to provide the communication functions of the electronic device, for example management of call states (connected, hung up, and so on). The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files. The notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example to notify of download completion or message reminders. The notification manager may also present notifications in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows, for example prompting text in the status bar, emitting a prompt sound, vibrating the electronic device, or flashing the indicator light.
The Android runtime includes a core library and a virtual machine and is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the Android core library.
The application layer and the framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include multiple functional modules, for example a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats as well as still image files, and can support multiple audio and video encoding formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software and includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
The kernel layer is the core of the operating system of the electronic device and is the first layer of software extension based on the hardware. It provides the most basic functions of the operating system, is the foundation on which the operating system works, and is responsible for managing the system's processes, memory, device drivers, files, and network system, which determines the performance and stability of the system. For example, the kernel can determine when an application may operate on a certain part of the hardware.
The kernel layer includes programs closely related to hardware, such as interrupt handlers and device drivers; basic, common, frequently executed modules, such as a clock management module and a process scheduling module; and key data structures. The kernel layer may be provided in the processor or solidified in the internal memory.
The hardware layer includes multiple hardware devices of the electronic device, such as a camera and a display screen.
Referring to FIG. 4, which is a flowchart of a video shooting method provided by an embodiment of the present application, the method is applied to an electronic device and includes the following steps.
S101: Acquire video frame data captured by the camera, and identify whether the video frame data contains a human face. If the video frame data contains a human face, the process proceeds to S102; if the video frame data does not contain a human face, the process continues with S101.
As shown in FIG. 3, in an embodiment of the present application, the hardware layer 106 includes an image processor 1061, and the image processor 1061 includes, but is not limited to, an image front end (IFE) 1062 and an image processing engine (IPE) 1063. The image processor 1061 communicates with the camera 193 through a mobile industry processor interface (MIPI). The camera 193 includes, but is not limited to, a lens and an image sensor. The lens is used to collect optical signals within the shooting range of the camera, and the image sensor is used to convert the optical signals collected by the lens into electrical signals to obtain image data or video frame data. The image data obtained by the image sensor is a RAW image, and the video frame data obtained by the image sensor consists of RAW video frame images.
Referring to FIG. 5, which is a schematic architecture diagram of a video shooting system provided by an embodiment of the present application, the video shooting system 10 includes, but is not limited to, the image front end 1062, the image processing engine 1063, a mode switching module 11, and an out-of-focus processing module 12.
In an embodiment of the present application, acquiring the video frame data captured by the camera and identifying whether the video frame data contains a human face includes: in response to a preset operation of the user, acquiring the video frame data captured by the camera, performing format conversion on the video frame data to obtain a first video stream, performing scene analysis on the first video stream, and identifying whether the video frame data contains a human face. The preset operation of the user may be an operation of opening the camera application, an operation of opening the camera application and enabling the video function, or an operation of opening the camera application and triggering video shooting.
Specifically, the image front end performs format conversion on the video frame data captured by the camera to obtain the first video stream. In an embodiment of the present application, the video frame data captured by the camera includes multiple video frame images, and the video frame images initially captured by the camera are in RAW format. The image front end converts each RAW video frame image in the video frame data into a video frame image in BMP (bitmap) format to obtain the first video stream, and transmits the first video stream to the image processing engine. The first video stream is a tiny stream.
In an embodiment of the present application, a scene recognition module runs in the image processing engine. After the image processing engine receives the first video stream, the scene recognition module uses an AI scene detection algorithm to recognize the scene in the first video stream and judge whether the video frame data contains a human face. Specifically, the scene recognition module performs face recognition on each video frame image in the first video stream to judge whether the video frame data contains a human face; if a preset number of consecutive video frame images are recognized as containing a human face, it is determined that the video frame data contains a human face; otherwise, it is determined that the video frame data does not contain a human face. The preset number is 5; in other embodiments, the preset number may also be set to other values as required.
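As a hedged illustration of the "preset number of consecutive frames" rule above, the following Python sketch keeps a running counter over per-frame detection results; the class name and counter-based structure are assumptions made only for illustration.

```python
# Sketch: confirm a face only after PRESET_COUNT consecutive positive detections.
PRESET_COUNT = 5  # the preset number used in this embodiment

class ConsecutiveFaceConfirmer:
    def __init__(self, preset_count: int = PRESET_COUNT):
        self.preset_count = preset_count
        self.streak = 0  # current run of consecutive frames containing a face

    def update(self, frame_has_face: bool) -> bool:
        """Feed one per-frame detection result; return True once the video
        frame data is considered to contain a human face."""
        self.streak = self.streak + 1 if frame_has_face else 0
        return self.streak >= self.preset_count
```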
In an embodiment of the present application, the scene recognition module using the AI scene detection algorithm to recognize the scene in the first video stream and judge whether the video frame data contains a human face includes: the scene recognition module uses a cascade classifier to recognize the scene in the first video stream and judge whether the video frame data contains a human face. The cascade classifier is formed by cascading multiple strong classifiers, and each strong classifier is in turn formed from a certain number of weak classifiers through the ADABOOST algorithm (an algorithm that produces a final strong classifier by iterating weak classifiers). The weak classifiers are used to extract Haar-like rectangular features of the image; a rectangular feature is a rectangle with a black region and a white region, and may include original rectangular features and extended rectangular features. Specifically, a rectangle is selected and placed on the video frame image, and the sum of the pixels in the black region is subtracted from the sum of the pixels in the white region; the resulting value is the feature value of the rectangular feature. When the rectangular feature is placed over a face region and over a non-face region of the video frame image, the calculated feature values differ, so whether the region of the video frame image in which the rectangular feature is placed is a face region can be judged based on the feature value, and thus whether the video frame image contains a human face can be judged.
If a face region can be identified in the video frame image by the cascade classifier, it is determined that the video frame image contains a human face; if no face region is identified in the video frame image by the cascade classifier, it is determined that the video frame image does not contain a human face. If a preset number of consecutive video frame images contain human faces, it is determined that the video frame data captured by the camera contains a human face. Referring to FIG. 6, which is an effect diagram of recognizing a face image using a cascade classifier provided by an embodiment of the present application, the rectangular frame part is the face region.
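A Haar cascade of the kind described here is available off the shelf in OpenCV. The sketch below illustrates the approach only; the bundled `haarcascade_frontalface_default.xml` model and the parameter values are assumptions and stand-ins, not the patent's own classifier.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade (an assumed stand-in for the
# cascade classifier described in this embodiment).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

def detect_face_boxes(frame_bgr):
    """Return a list of (x, y, w, h) rectangles marking detected face regions."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return list(boxes)
```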
In other embodiments, whether a video frame image contains a human face may also be detected using other face detection methods, such as template-matching-based face detection, appearance- and shape-based face detection, neural-network-based face detection, feature-based face detection, and skin-color-based face detection.
S102: Calculate the proportion of the size of the face region in the video frame data.
In an embodiment of the present application, if the video frame data contains one human face, the proportion of the size of the face region corresponding to that face in the video frame data is calculated; if the video frame data contains multiple human faces, the proportion of the size of the face regions corresponding to all faces in the video frame data is calculated.
In an embodiment of the present application, the detailed procedure for calculating the proportion of the size of the face region in the video frame data is shown in FIG. 7 and specifically includes the following steps.
S1021: Mark the face region identified in the video frame data with a rectangular frame.
In an embodiment of the present application, after recognizing that the video frame data captured by the camera contains a human face, the scene recognition module further marks the face region identified in the video frame data with a rectangular frame. The rectangular frame may be the smallest rectangle that encloses the face region.
Specifically, during face detection, the scene recognition module simultaneously calculates the face coordinates and determines, based on the coordinates of the top, bottom, left, and right endpoints of the face region, the smallest rectangular frame that encloses the face region; the horizontal edges of the smallest rectangular frame extend in the horizontal direction, and the vertical edges extend in the vertical direction.
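The smallest enclosing rectangle can be derived directly from the extreme face coordinates. The sketch below assumes the endpoints are available as (x, y) pixel coordinates; that representation, and the helper name, are illustrative assumptions.

```python
def min_enclosing_rect(face_points):
    """Given (x, y) coordinates of the face region's endpoints, return the
    smallest axis-aligned rectangle (x, y, w, h) that encloses them."""
    xs = [p[0] for p in face_points]
    ys = [p[1] for p in face_points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return x_min, y_min, x_max - x_min, y_max - y_min
```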
S1022: Determine the size of the face region based on the rectangular frame.
In an embodiment of the present application, the size of the face region may include the width w, the height h, and the area s of the face region, where the area s is the product of the width w and the height h; the width, height, and area are all measured in numbers of pixels. Determining the size of the face region based on the rectangular frame includes: determining the width of the rectangular frame that marks the face region in the video frame image as the width of the face region, and determining the height of the rectangular frame as the height of the face region.
S1023: Calculate the ratio between the size of the face region and the size of the video frame image to obtain the proportion of the size of the face region in the video frame data.
In an embodiment of the present application, the size of the video frame image may include the width W, the height H, and the area S of the video frame image, where the area S is the product of the width W and the height H; the width W, height H, and area S are all preset values. The width W is the number of pixels along the horizontal edge of the video frame image, the height H is the number of pixels along the vertical edge of the video frame image, and the area S is the total number of pixels of the video frame image. For example, if the resolution of the video frame image is 640*480, the number of pixels along the horizontal edge is 640 and the number of pixels along the vertical edge is 480, so the number of pixels of the video frame image is 307200.
In an embodiment of the present application, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the width of the face region and the width of the video frame image. For example, if the width of the face region is 400 and the width of the video frame image is 640, the ratio between the width of the face region and the width of the video frame image is Rw = 400/640 = 0.625.
In another embodiment of the present application, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the height of the face region and the height of the video frame image. For example, if the height of the face region is 240 and the height of the video frame image is 480, the ratio between the height of the face region and the height of the video frame image is Rh = 240/480 = 0.5.
In another embodiment of the present application, calculating the ratio between the size of the face region and the size of the video frame image includes: calculating the ratio between the area of the face region and the area of the video frame image. For example, if the area of the face region is 10500 and the area of the video frame image is 307200, the ratio between the area of the face region and the area of the video frame image is Rs = 10500/307200 ≈ 0.034.
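The three ratio variants (width, height, and area) follow directly from the rectangle and the frame resolution. The following sketch reproduces the kind of arithmetic shown in the examples above; the function and variable names are illustrative assumptions.

```python
def face_ratios(face_w: int, face_h: int, frame_w: int, frame_h: int):
    """Return the width, height and area ratios of the face region."""
    r_w = face_w / frame_w                        # e.g. 400 / 640 = 0.625
    r_h = face_h / frame_h                        # e.g. 240 / 480 = 0.5
    r_s = (face_w * face_h) / (frame_w * frame_h) # area ratio
    return r_w, r_h, r_s

print(face_ratios(400, 240, 640, 480))            # (0.625, 0.5, 0.3125)
```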
S103: Judge whether the proportion of the size of the face region in the video frame data is greater than or equal to a first preset value. If the proportion is greater than or equal to the first preset value, the process proceeds to S104; if the proportion is smaller than the first preset value, the process proceeds to S105.
In an embodiment of the present application, the first preset value is 1/3. In other embodiments, the first preset value may also be set to other values as required.
In another embodiment of the present application, it is judged whether the proportion of the size of the face region in the video frame data is greater than or equal to the first preset value in a preset number of consecutive video frame images; if so, the process proceeds to S104.
In another embodiment of the present application, it is judged whether a preset number of consecutive video frame images contain exactly one human face and whether the proportion of the size of the face region in the video frame data in those images is greater than or equal to the first preset value; if the preset number of consecutive video frame images contain one human face and the proportion is greater than or equal to the first preset value, the process proceeds to S104.
S104: Determine that the recommended video mode is the portrait mode, and shoot video based on the portrait mode.
As shown in FIG. 5, in an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the portrait mode, a prompt control is displayed on the camera application interface, and the text "Portrait mode recommended" is displayed on the prompt control. In response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the portrait mode, and video shooting is performed based on the portrait mode.
In an embodiment of the present application, shooting video based on the portrait mode includes: switching to the lens with the largest aperture for video shooting, or increasing the aperture of the lens currently shooting the video to its maximum value.
In another embodiment of the present application, shooting video based on the portrait mode includes: performing blur processing on the video frame data captured by the camera, and displaying the blurred video frame data on the display screen. Specifically, the camera transmits the captured video frame data to the image front end; the image front end performs format conversion on the video frame data, converting the RAW-format video frame data into YUV-format video frame data, generates a second video stream, and transmits the second video stream to the image processing engine. The second video stream is a preview stream. The image processing engine performs blur processing on the video frame images in the second video stream through the out-of-focus processing module, and displays the blurred video frame data on the display screen.
In an embodiment of the present application, the detailed procedure for performing blur processing on the video frame data captured by the camera is shown in FIG. 8 and specifically includes the following steps.
S1041: Perform portrait matting on the video frame image, and extract the portrait region in the video frame image.
In an embodiment of the present application, the blur processing is implemented by the out-of-focus processing module based on a bokeh (background blurring) algorithm. Performing portrait matting on the video frame image and extracting the portrait region includes: inputting the video frame image into a portrait matting model, and extracting the portrait region in the video frame image through the portrait matting model. The portrait matting model may be an FCN (fully convolutional network), the semantic segmentation network SegNet, or the dense prediction network Unet.
S1042: Perform blur processing on the background region of the video frame image.
In an embodiment of the present application, the background region of the video frame image is the region of the video frame image other than the portrait region. Performing blur processing on the background region of the video frame image includes: performing Gaussian blur on the background region to obtain the blurred background region.
Specifically, performing Gaussian blur on the background region includes: presetting the mean and standard deviation of a two-dimensional Gaussian distribution function; dividing the background region into multiple n*n preset regions; inputting the coordinates of each pixel in each n*n preset region into the two-dimensional Gaussian distribution function to obtain the output value of the function; dividing the output value corresponding to each pixel by the sum of the output values corresponding to all pixels in the preset region to obtain the weight of each pixel in the preset region; multiplying the RGB three-channel pixel values of each pixel by the weights to obtain the Gaussian-blurred pixel values; replacing the initial pixel values of the pixels with the Gaussian-blurred pixel values to obtain Gaussian-blurred pixels; and determining the image composed of the Gaussian-blurred pixels of the multiple n*n preset regions as the blurred video frame image. Here, n is the blur radius and may be any positive integer. Optionally, the mean of the two-dimensional Gaussian distribution function is 0 and the standard deviation is 1.5.
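The normalized-weight computation described above corresponds to filtering the background with a normalized Gaussian kernel. The sketch below builds such a kernel explicitly and applies it with OpenCV; the kernel size of 5 and standard deviation of 1.5 follow the values mentioned here, the function name is an assumption, and `cv2.GaussianBlur` would give a comparable result.

```python
import cv2
import numpy as np

def gaussian_blur_background(background_bgr, n: int = 5, sigma: float = 1.5):
    """Blur the background image with an n*n Gaussian kernel whose weights
    are normalized to sum to 1, mirroring the weight computation above."""
    ax = np.arange(n) - (n - 1) / 2.0                      # centred coordinates
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))   # 2-D Gaussian values
    kernel /= kernel.sum()                                 # normalize to weights
    # filter2D applies the same normalized weights to each colour channel.
    return cv2.filter2D(background_bgr, -1, kernel)
```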
S1043: Fuse the extracted portrait region with the blurred background region.
In an embodiment of the present application, the extracted portrait region is placed at the initial portrait position, and the extracted portrait region and the blurred background region are merged so that the portrait region is fused with the blurred background region.
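Assuming the matting model outputs a portrait mask aligned with the frame, the fusion step can be sketched as a per-pixel composite; the soft mask representation in [0, 1] is an assumption made for illustration.

```python
import numpy as np

def fuse_portrait_and_background(frame_bgr, blurred_bgr, portrait_mask):
    """Keep the original pixels inside the portrait mask and the blurred
    pixels outside it, producing the bokeh-style output frame.
    portrait_mask: float array in [0, 1] with shape (H, W)."""
    mask3 = portrait_mask[..., None].astype(np.float32)   # broadcast over channels
    fused = frame_bgr * mask3 + blurred_bgr * (1.0 - mask3)
    return fused.astype(frame_bgr.dtype)
```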
S105,判断人脸区域的尺寸在视频帧数据中所占的比例是否小于或等于第二预设值,第二预设值小于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例小于或等于第二预设值,流程进入S106;若人脸区域的尺寸在视频帧数据中所占的比例大于第二预设值,流程返回S101。其中,第二预设值为1/5。在其他实施例中,第二预设值也可以根据需求设置为其他数值。S105. Determine whether the proportion of the size of the human face area in the video frame data is less than or equal to a second preset value, and the second preset value is smaller than the first preset value. If the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, the process enters S106; if the proportion of the size of the human face area in the video frame data is greater than the second preset value, The process returns to S101. Wherein, the second preset value is 1/5. In other embodiments, the second preset value can also be set to other values according to requirements.
In another embodiment of the present application, it is determined whether the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value for a preset number of consecutive video frame images; if the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value for the preset number of consecutive video frame images, the process goes to S106.
In another embodiment of the present application, it is determined whether the number of human faces in a preset number of consecutive video frame images is one, and whether the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value; if the preset number of consecutive video frame images contain multiple human faces, and the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value in each of these images, the process goes to S106.
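The consecutive-frame variants above can be summarized in a small sketch. The code below assumes each frame yields a list of face bounding boxes and uses a five-frame window; the window length, data structures and function names are illustrative assumptions, while the 1/5 threshold comes from the text.

```python
from collections import deque

SECOND_PRESET = 1 / 5      # protagonist-mode threshold from the text
WINDOW = 5                 # "preset number" of consecutive frames (assumed value)

recent = deque(maxlen=WINDOW)

def face_ratio(box, frame_w, frame_h):
    """Proportion of the frame occupied by one face bounding box (x, y, w, h)."""
    x, y, w, h = box
    return (w * h) / (frame_w * frame_h)

def should_recommend_protagonist(faces, frame_w, frame_h):
    """faces: list of face boxes detected in the current frame."""
    small_faces = bool(faces) and all(
        face_ratio(b, frame_w, frame_h) <= SECOND_PRESET for b in faces)
    recent.append((len(faces), small_faces))
    if len(recent) < WINDOW:
        return False
    # Variant from the last paragraph: several faces, all at or below the
    # threshold, in every frame of the window.
    return all(count > 1 and small for count, small in recent)
```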
S106,确定推荐的视频模式为主角模式,并基于主角模式进行视频拍摄。S106. Determine that the recommended video mode is the protagonist mode, and perform video shooting based on the protagonist mode.
In an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the protagonist mode, a prompt control is displayed on the camera application interface, and the text "Protagonist mode is recommended" is displayed on the prompt control; in response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the protagonist mode, and video shooting is performed based on the protagonist mode. The protagonist is the portrait corresponding to the focus position, and the focus position is determined based on the user's touch selection.
In the protagonist mode, the shooting interface displays portrait video frame data and panoramic video frame data at the same time, and the portrait video frame data is superimposed on the panoramic video frame data and displayed in the form of a picture-in-picture. In an embodiment of the present application, performing video shooting based on the protagonist mode includes: controlling one camera to shoot a portrait video of the protagonist, controlling another camera to shoot a panoramic video, displaying the video frame data of the panoramic video on the shooting interface, and superimposing the video frame data of the portrait video on the video frame data of the panoramic video in the form of a picture-in-picture. The camera that shoots the portrait video of the protagonist is the main camera or the telephoto camera (the camera with the longest focal length), which can track and shoot the protagonist, and the camera that shoots the panoramic video is a wide-angle camera. In other embodiments, the portrait video and the panoramic video may also be shot by the same camera, in which case the protagonist portrait part of the panoramic video frame data captured by the camera is cropped and enlarged, and displayed in the form of a picture-in-picture.
Specifically, referring to FIG. 9, the camera transmits the captured video frame data to the image front end. The image front end first converts the video frame data into a first video stream (tiny stream) and performs scene analysis on the first video stream using an AI scene detection algorithm. If the scene analysis result indicates that the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, a recommendation decision for the protagonist mode is made according to the scene analysis result, and the video mode is switched to the protagonist mode in response to a user operation. After the video mode is switched to the protagonist mode, the image front end converts the video frame data captured by the camera into a second video stream (preview stream) and transmits the second video stream to the image processing engine, and the image processing engine performs optimization processing on the second video stream, such as anti-shake, noise reduction and color correction. After converting the format of the video frame data, the image front end also crops out the protagonist portrait video frame data from the video frame data and enlarges it, for example by a factor of two, to generate a third video stream, which is an enlarged video stream that tracks the subject. The image front end transmits the third video stream to the image processing engine, and the image processing engine performs optimization processing on the third video stream, such as anti-shake, noise reduction and color correction. The image processing engine splices the processed second video stream and third video stream and displays the spliced video stream on the display screen, so that the second video stream is displayed in full and the third video stream is displayed in the form of a picture-in-picture.
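As a simplified, single-camera illustration of the picture-in-picture composition described above, the following sketch crops the tracked protagonist region from a panoramic frame, enlarges it by a factor of two, and overlays it in a corner of the frame; the nearest-neighbour enlargement, the corner placement and the function name are assumptions made only for illustration.

```python
import numpy as np

def make_protagonist_pip(panorama: np.ndarray, roi, scale: int = 2,
                         margin: int = 16) -> np.ndarray:
    """Crop the tracked protagonist region (x, y, w, h) from the panoramic frame,
    enlarge it, and overlay it in the top-left corner as a picture-in-picture.
    Assumes the enlarged crop fits inside the frame."""
    x, y, w, h = roi
    crop = panorama[y:y + h, x:x + w]
    # Nearest-neighbour enlargement stands in for the "zoom in on the subject" step.
    enlarged = crop.repeat(scale, axis=0).repeat(scale, axis=1)
    out = panorama.copy()
    ph, pw = enlarged.shape[:2]
    out[margin:margin + ph, margin:margin + pw] = enlarged
    return out
```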
参阅图10所示,为本申请另一实施例提供的视频拍摄方法的流程图。所述方法应用于电子设备中,所述视频拍摄方法包括:Referring to FIG. 10 , it is a flowchart of a video shooting method provided by another embodiment of the present application. The method is applied in an electronic device, and the video shooting method includes:
S201,获取摄像头拍摄的视频帧数据,识别视频帧数据是否包含人脸。若视频帧数据包含人脸,流程进入S202;若视频帧数据不包含人脸,流程进入S206。S201. Acquire video frame data captured by a camera, and identify whether the video frame data includes a human face. If the video frame data contains a human face, the process enters S202; if the video frame data does not contain a human face, the process enters S206.
S202,计算人脸区域的尺寸在视频帧数据中所占的比例。S202. Calculate the proportion of the size of the face area in the video frame data.
S203,判断人脸区域的尺寸在视频帧数据中所占的比例是否大于或等于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例大于或等于第一预设值,流程进入S204;若人脸区域的尺寸在视频帧数据中所占的比例小于第一预设值,流程进入S205。S203. Determine whether the proportion of the size of the human face area in the video frame data is greater than or equal to a first preset value. If the proportion of the size of the human face area in the video frame data is greater than or equal to the first preset value, the process enters S204; if the proportion of the size of the human face area in the video frame data is smaller than the first preset value, The flow goes to S205.
S204,确定推荐的视频模式为人像模式,并基于人像模式进行视频拍摄。S204. Determine that the recommended video mode is the portrait mode, and perform video shooting based on the portrait mode.
S205,判断人脸区域的尺寸在视频帧数据中所占的比例是否小于或等于第二预设值,第二预设值小于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例小于或等于第二预设值,流程进入S206;若人脸区域的尺寸在视频帧数据中所占的比例大于第二预设值,流程进入S207。S205. Determine whether the proportion of the size of the human face area in the video frame data is less than or equal to a second preset value, and the second preset value is smaller than the first preset value. If the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, the process enters S206; if the proportion of the size of the human face area in the video frame data is greater than the second preset value, The flow goes to S207.
S206,确定推荐的视频模式为主角模式,并基于主角模式进行视频拍摄。S206. Determine that the recommended video mode is the protagonist mode, and perform video shooting based on the protagonist mode.
S207,识别视频帧数据的场景信息是否为夜景场景。若视频帧数据的场景信息为夜景场景,流程进入S208;若视频帧数据的场景信息不是夜景场景,流程返回S201。S207. Identify whether the scene information of the video frame data is a night scene. If the scene information of the video frame data is a night scene, the process goes to S208; if the scene information of the video frame data is not a night scene, the process returns to S201.
In an embodiment of the present application, identifying whether the scene information of the video frame data is a night scene includes: the scene recognition module acquires a preset number of consecutive video frame images, acquires the brightness information luxIndex of each video frame image, and determines whether the brightness information luxIndex of the preset number of consecutive video frame images is less than or equal to a preset brightness. If the brightness information luxIndex of the preset number of consecutive video frame images is less than or equal to the preset brightness, the current shooting scene is relatively dark, and the scene information of the video frame data is determined to be a night scene; if the brightness information of any video frame image is greater than the preset brightness, the current shooting scene is relatively bright, and the scene information of the video frame data is determined not to be a night scene. Optionally, the preset number is 5.
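A minimal sketch of this check, assuming the luxIndex values of the most recent frames are available as a list; the concrete brightness threshold is an assumption, since the text only refers to a preset brightness, while the five-frame window follows the optional value given above.

```python
PRESET_FRAMES = 5          # consecutive frames, as stated in the text
PRESET_LUX = 300           # assumed luxIndex threshold; the text does not give a value

def is_night_scene(lux_indices) -> bool:
    """Night scene only if every one of the last PRESET_FRAMES frames is at or
    below the preset brightness."""
    if len(lux_indices) < PRESET_FRAMES:
        return False
    return all(lux <= PRESET_LUX for lux in lux_indices[-PRESET_FRAMES:])
```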
S208,确定推荐的视频模式为夜景模式,并基于夜景模式进行视频拍摄。S208. Determine that the recommended video mode is a night scene mode, and perform video shooting based on the night scene mode.
In an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the night scene mode, a prompt control is displayed on the camera application interface, and the text "Night scene mode is recommended" is displayed on the prompt control; in response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the night scene mode, and video shooting is performed based on the night scene mode.
参阅图11所示,为本申请另一实施例提供的视频拍摄方法的流程图。所述方法应用于电子设备中,所述视频拍摄方法包括:Referring to FIG. 11 , it is a flowchart of a video shooting method provided by another embodiment of the present application. The method is applied in an electronic device, and the video shooting method includes:
S301,获取摄像头拍摄的视频帧数据,识别视频帧数据是否包含人脸。若视频帧数据包含人脸,流程进入S302;若视频帧数据不包含人脸,流程进入S307。S301. Acquire video frame data captured by a camera, and identify whether the video frame data includes a human face. If the video frame data contains a human face, the process enters S302; if the video frame data does not contain a human face, the process enters S307.
S302,计算人脸区域的尺寸在视频帧数据中所占的比例。S302. Calculate the proportion of the size of the face area in the video frame data.
S303,判断人脸区域的尺寸在视频帧数据中所占的比例是否大于或等于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例大于或等于第一预设值,流程进入S304;若人脸区域的尺寸在视频帧数据中所占的比例小于第一预设值,流程进入S305。S303. Determine whether the proportion of the size of the human face area in the video frame data is greater than or equal to a first preset value. If the proportion of the size of the human face area in the video frame data is greater than or equal to the first preset value, the process enters S304; if the proportion of the size of the human face area in the video frame data is smaller than the first preset value, The flow goes to S305.
S304,确定推荐的视频模式为人像模式,并基于人像模式进行视频拍摄。S304. Determine that the recommended video mode is the portrait mode, and perform video shooting based on the portrait mode.
S305,判断人脸区域的尺寸在视频帧数据中所占的比例是否小于或等于第二预设值,第二预设值小于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例小于或等于第二预设值,流程进入S306;若人脸区域的尺寸在视频帧数据中所占的比例大于第二预设值,流程进入S307。S305. Determine whether the proportion of the size of the human face area in the video frame data is less than or equal to a second preset value, and the second preset value is smaller than the first preset value. If the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, the process enters S306; if the proportion of the size of the human face area in the video frame data is greater than the second preset value, The flow goes to S307.
S306,确定推荐的视频模式为主角模式,并基于主角模式进行视频拍摄。S306. Determine that the recommended video mode is the protagonist mode, and perform video shooting based on the protagonist mode.
S307,识别视频帧数据的场景信息是否为夜景场景。若视频帧数据的场景信息为夜景场景,流程进入S308;若视频帧数据的场景信息不是夜景场景,流程进入S309。S307. Identify whether the scene information of the video frame data is a night scene. If the scene information of the video frame data is a night scene, the process goes to S308; if the scene information of the video frame data is not a night scene, the process goes to S309.
S308,确定推荐的视频模式为夜景模式,并基于夜景模式进行视频拍摄。S308. Determine that the recommended video mode is a night scene mode, and perform video shooting based on the night scene mode.
S309,识别视频帧数据的场景信息是否为高动态范围(High Dynamic Range,HDR)场景。若视频帧数据的场景信息为高动态范围场景,流程进入S310;若视频帧数据的场景信息不是高动态范围场景,流程返回S301。S309. Identify whether the scene information of the video frame data is a high dynamic range (High Dynamic Range, HDR) scene. If the scene information of the video frame data is a high dynamic range scene, the process enters S310; if the scene information of the video frame data is not a high dynamic range scene, the process returns to S301.
In an embodiment of the present application, identifying whether the scene information of the video frame data is a high dynamic range scene includes: the scene recognition module acquires a preset number of consecutive video frame images from the video frame data, acquires the dynamic range value drValue of these video frame images, and determines whether the dynamic range value drValue of the preset number of consecutive video frame images is greater than or equal to a preset dynamic range value. If the dynamic range value drValue of the preset number of consecutive video frame images is greater than or equal to the preset dynamic range value, the scene information of the video frame data is determined to be a high dynamic range scene; if the dynamic range value drValue of any video frame image is less than the preset dynamic range value, the scene information of the video frame data is determined not to be a high dynamic range scene. In an embodiment of the present application, the dynamic range value is the ratio between the highest brightness and the lowest brightness in the image. Optionally, the preset dynamic range value is 50.
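A minimal sketch of the HDR check, assuming per-frame luminance arrays (NumPy) are available; the five-frame window is an assumption mirroring the night-scene check, while the ratio definition of drValue and the threshold of 50 come from the text.

```python
import numpy as np

PRESET_FRAMES = 5          # assumed window length, mirroring the night-scene check
PRESET_DR = 50             # preset dynamic range value from the text

def dr_value(frame_luma: np.ndarray) -> float:
    """Dynamic range of one frame: ratio of highest to lowest luminance."""
    lo = max(float(frame_luma.min()), 1e-6)   # avoid division by zero for pure black pixels
    return float(frame_luma.max()) / lo

def is_hdr_scene(frames_luma) -> bool:
    """frames_luma: luminance arrays of the most recent frames."""
    if len(frames_luma) < PRESET_FRAMES:
        return False
    return all(dr_value(f) >= PRESET_DR for f in frames_luma[-PRESET_FRAMES:])
```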
S310,确定推荐的视频模式为高动态范围模式,并基于高动态范围模式进行视频拍摄。S310. Determine that the recommended video mode is a high dynamic range mode, and perform video shooting based on the high dynamic range mode.
In an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the high dynamic range mode, a prompt control is displayed on the camera application interface, and the text "HDR mode is recommended" is displayed on the prompt control; in response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the high dynamic range mode, and video shooting is performed based on the high dynamic range mode.
参阅图12-13所示,为本申请另一实施例提供的视频拍摄方法的流程图。所述方法应用于电子设备中,所述视频拍摄方法包括:Refer to FIGS. 12-13 , which are flowcharts of a video shooting method provided by another embodiment of the present application. The method is applied in an electronic device, and the video shooting method includes:
S401,获取摄像头拍摄的视频帧数据,识别视频帧数据是否包含人脸。若视频帧数据包含人脸,流程进入S402;若视频帧数据不包含人脸,流程进入S407。S401. Acquire video frame data captured by a camera, and identify whether the video frame data contains a human face. If the video frame data contains a human face, the process enters S402; if the video frame data does not contain a human face, the process enters S407.
S402,计算人脸区域的尺寸在视频帧数据中所占的比例。S402. Calculate the proportion of the size of the face area in the video frame data.
S403,判断人脸区域的尺寸在视频帧数据中所占的比例是否大于或等于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例大于或等于第一预设值,流程进入S404;若人脸区域的尺寸在视频帧数据中所占的比例小于第一预设值,流程进入S405。S403. Determine whether the proportion of the size of the human face area in the video frame data is greater than or equal to a first preset value. If the proportion of the size of the human face area in the video frame data is greater than or equal to the first preset value, the process enters S404; if the proportion of the size of the human face area in the video frame data is smaller than the first preset value, The flow goes to S405.
S404,确定推荐的视频模式为人像模式,并基于人像模式进行视频拍摄。S404. Determine that the recommended video mode is the portrait mode, and perform video shooting based on the portrait mode.
S405,判断人脸区域的尺寸在视频帧数据中所占的比例是否小于或等于第二预设值,第二预设值小于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例小于或等于第二预设值,流程进入S406;若人脸区域的尺寸在视频帧数据中所占的比例大于第二预设值,流程进入S407。S405. Determine whether the proportion of the size of the human face area in the video frame data is less than or equal to a second preset value, and the second preset value is smaller than the first preset value. If the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, the process enters S406; if the proportion of the size of the human face area in the video frame data is greater than the second preset value, The flow goes to S407.
S406,确定推荐的视频模式为主角模式,并基于主角模式进行视频拍摄。S406. Determine that the recommended video mode is the protagonist mode, and perform video shooting based on the protagonist mode.
S407,识别视频帧数据的场景信息是否为夜景场景。若视频帧数据的场景信息为夜景场景,流程进入S408;若视频帧数据的场景信息不是夜景场景,流程进入S409。S407. Identify whether the scene information of the video frame data is a night scene. If the scene information of the video frame data is a night scene, the process goes to S408; if the scene information of the video frame data is not a night scene, the process goes to S409.
S408,确定推荐的视频模式为夜景模式,并基于夜景模式进行视频拍摄。S408. Determine that the recommended video mode is a night scene mode, and perform video shooting based on the night scene mode.
S409,识别视频帧数据的场景信息是否为高动态范围场景。若视频帧数据的场景信息为高动态范围场景,流程进入S410;若视频帧数据的场景信息不是高动态范围场景,流程进入S411。S409. Identify whether the scene information of the video frame data is a high dynamic range scene. If the scene information of the video frame data is a high dynamic range scene, the process enters S410; if the scene information of the video frame data is not a high dynamic range scene, the process enters S411.
S410,确定推荐的视频模式为高动态范围模式,并基于高动态范围模式进行视频拍摄。S410. Determine that the recommended video mode is a high dynamic range mode, and perform video shooting based on the high dynamic range mode.
S411,识别视频帧数据的场景信息是否为微距场景。若预览流的场景信息为微距场景,流程进入S412;若视频帧数据的场景信息不是微距场景,流程返回S401。S411. Identify whether the scene information of the video frame data is a macro scene. If the scene information of the preview stream is a macro scene, the process enters S412; if the scene information of the video frame data is not a macro scene, the process returns to S401.
In an embodiment of the present application, identifying whether the scene information of the video frame data is a macro scene includes: the scene recognition module determines whether the scene information of the video frame data is a macro scene based on the movement data vcmCode of the camera voice coil motor, the focus status focusStatus of the camera, and the calibration data calibData of the camera. Specifically, the scene recognition module acquires the movement data vcmCode of the camera voice coil motor, the focus status focusStatus of the camera, and the calibration data calibData of the camera, and determines whether the focus status focusStatus of the camera is successful. If the focus status focusStatus of the camera is not successful, it determines whether the movement data vcmCode of the camera voice coil motor is greater than or equal to preset movement data. If the movement data vcmCode of the camera voice coil motor is greater than or equal to the preset movement data, it determines whether the calibration data calibData of the camera is normal. If the calibration data calibData of the camera is normal, the camera has failed to focus even though it has moved far enough and its calibration data is normal, which indicates that the camera may be unable to focus because of the focusing distance, and the scene information of the video frame data is determined to be a macro scene. If the focus status focusStatus of the camera is successful, or the movement data vcmCode of the camera voice coil motor is less than the preset movement data, or the calibration data calibData of the camera is abnormal, the scene information of the video frame data is determined not to be a macro scene. Optionally, the preset movement data is 10 movement steps of the voice coil motor.
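The three-condition macro check can be condensed into a short sketch. The function below is illustrative only; the parameter names mirror focusStatus, vcmCode and calibData from the text, and the threshold of 10 voice coil motor steps is the optional value given above.

```python
PRESET_VCM_STEPS = 10      # preset movement data: 10 voice coil motor steps (from the text)

def is_macro_scene(focus_status: str, vcm_code: int, calib_ok: bool) -> bool:
    """Macro scene only when focusing failed, the voice coil motor has already
    moved at least the preset number of steps, and the calibration data is normal."""
    if focus_status == "success":
        return False
    if vcm_code < PRESET_VCM_STEPS:
        return False
    return calib_ok
```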
S412,确定推荐的视频模式为微距模式,并基于微距模式进行视频拍摄。S412. Determine that the recommended video mode is a macro mode, and perform video shooting based on the macro mode.
In an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the macro mode, a prompt control is displayed on the camera application interface, and the text "Macro mode is recommended" is displayed on the prompt control; in response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the macro mode, and video shooting is performed based on the macro mode.
参阅图14-15所示,为本申请另一实施例提供的视频拍摄方法的流程图。所述方法应用于电子设备中,所述视频拍摄方法包括:Refer to FIGS. 14-15 , which are flowcharts of a video shooting method provided by another embodiment of the present application. The method is applied in an electronic device, and the video shooting method includes:
S501,获取摄像头拍摄的视频帧数据,识别视频帧数据是否包含人脸。若视频帧数据包含人脸,流程进入S502;若视频帧数据不包含人脸,流程进入S506。S501. Acquire video frame data captured by a camera, and identify whether the video frame data includes a human face. If the video frame data contains a human face, the process enters S502; if the video frame data does not contain a human face, the process enters S506.
S502,计算人脸区域的尺寸在视频帧数据中所占的比例。S502. Calculate the proportion of the size of the face area in the video frame data.
S503,判断人脸区域的尺寸在视频帧数据中所占的比例是否大于或等于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例大于或等于第一预设值,流程进入S504;若人脸区域的尺寸在视频帧数据中所占的比例小于第一预设值,流程进入S505。S503. Determine whether the proportion of the size of the human face area in the video frame data is greater than or equal to a first preset value. If the proportion of the size of the human face area in the video frame data is greater than or equal to the first preset value, the process enters S504; if the proportion of the size of the human face area in the video frame data is smaller than the first preset value, The flow goes to S505.
S504,确定推荐的视频模式为人像模式,并基于人像模式进行视频拍摄。S504. Determine that the recommended video mode is the portrait mode, and perform video shooting based on the portrait mode.
S505,判断人脸区域的尺寸在视频帧数据中所占的比例是否小于或等于第二预设值,第二预设值小于第一预设值。若人脸区域的尺寸在视频帧数据中所占的比例小于或等于第二预设值,流程进入S506;若人脸区域的尺寸在视频帧数据中所占的比例大于第二预设值,流程进入S507。S505. Determine whether the proportion of the size of the face area in the video frame data is less than or equal to a second preset value, and the second preset value is smaller than the first preset value. If the proportion of the size of the human face area in the video frame data is less than or equal to the second preset value, the process enters S506; if the proportion of the size of the human face area in the video frame data is greater than the second preset value, The flow goes to S507.
S506,确定推荐的视频模式为主角模式,并基于主角模式进行视频拍摄。S506. Determine that the recommended video mode is the protagonist mode, and perform video shooting based on the protagonist mode.
S507,识别视频帧数据的场景信息是否为夜景场景。若视频帧数据的场景信息为夜景场景,流程进入S508;若视频帧数据的场景信息不是夜景场景,流程进入S509。S507. Identify whether the scene information of the video frame data is a night scene. If the scene information of the video frame data is a night scene, the process goes to S508; if the scene information of the video frame data is not a night scene, the process goes to S509.
S508,确定推荐的视频模式为夜景模式,并基于夜景模式进行视频拍摄。S508. Determine that the recommended video mode is a night scene mode, and perform video shooting based on the night scene mode.
S509,识别视频帧数据的场景信息是否为高动态范围场景。若视频帧数据的场景信息为高动态范围场景,流程进入S510;若预览流的场景信息不是高动态范围场景,流程进入S511。S509. Identify whether the scene information of the video frame data is a high dynamic range scene. If the scene information of the video frame data is a high dynamic range scene, the process enters S510; if the scene information of the preview stream is not a high dynamic range scene, the process enters S511.
S510,确定推荐的视频模式为高动态范围模式,并基于高动态范围模式进行视频拍摄。S510. Determine that the recommended video mode is a high dynamic range mode, and perform video shooting based on the high dynamic range mode.
S511,识别视频帧数据的场景信息是否为微距场景。若视频帧数据的场景信息为微距场景,流程进入S512;若视频帧数据的场景信息不是微距场景,流程进入S513。S511. Identify whether the scene information of the video frame data is a macro scene. If the scene information of the video frame data is a macro scene, the process goes to S512; if the scene information of the video frame data is not a macro scene, the process goes to S513.
In an embodiment of the present application, identifying whether the scene information of the video frame data is a macro scene includes: the scene recognition module determines whether the scene information of the video frame data is a macro scene based on the movement data vcmCode of the camera voice coil motor, the focus status focusStatus of the camera, and the calibration data calibData of the camera. Specifically, the scene recognition module acquires the movement data vcmCode of the camera voice coil motor, the focus status focusStatus of the camera, and the calibration data calibData of the camera, and determines whether the focus status focusStatus of the camera is successful. If the focus status focusStatus of the camera is not successful, it determines whether the movement data vcmCode of the camera voice coil motor is greater than or equal to preset movement data. If the movement data vcmCode of the camera voice coil motor is greater than or equal to the preset movement data, it determines whether the calibration data calibData of the camera is normal. If the calibration data calibData of the camera is normal, the camera has failed to focus even though it has moved far enough and its calibration data is normal, which indicates that the camera may be unable to focus because of the focusing distance, and the scene information of the video frame data is determined to be a macro scene. If the focus status focusStatus of the camera is successful, or the movement data vcmCode of the camera voice coil motor is less than the preset movement data, or the calibration data calibData of the camera is abnormal, the scene information of the video frame data is determined not to be a macro scene. Optionally, the preset movement data is 10 movement steps of the voice coil motor.
S512,确定推荐的视频模式为微距模式,并基于微距模式进行视频拍摄。S512. Determine that the recommended video mode is a macro mode, and perform video shooting based on the macro mode.
S513,识别视频帧数据的场景信息是否为多镜场景。若视频帧数据的场景信息为多镜场景,流程进入S514;若视频帧数据的场景信息不是多镜场景,流程返回S501。S513. Identify whether the scene information of the video frame data is a multi-camera scene. If the scene information of the video frame data is a multi-camera scene, the process proceeds to S514; if the scene information of the video frame data is not a multi-camera scene, the process returns to S501.
In an embodiment of the present application, identifying whether the scene information of the video frame data is a multi-camera scene includes: the scene recognition module acquires a preset number of consecutive video frame images from the video frame data and identifies whether the preset number of consecutive video frame images contain a pet. If the preset number of consecutive video frame images contain a pet, the scene information of the video frame data is determined to be a multi-camera scene; if any video frame image does not contain a pet, the scene information of the video frame data is determined not to be a multi-camera scene. That is, the scene recognition module determines that the scene information of the video frame data is a multi-camera scene when the scene analysis result aiScencDetResult indicates that the preset number of consecutive video frame images contain a pet.
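A minimal sketch of the pet-based multi-camera check, assuming the per-frame scene analysis results are available as dictionaries with a "pet" flag; the dictionary representation and the five-frame window are assumptions, not part of the original disclosure.

```python
PRESET_FRAMES = 5          # assumed window length

def is_multi_camera_scene(scene_results) -> bool:
    """scene_results: per-frame AI scene analysis results (aiScencDetResult),
    each assumed to expose a 'pet' flag. Multi-camera scene only if a pet is
    present in every one of the last PRESET_FRAMES frames."""
    if len(scene_results) < PRESET_FRAMES:
        return False
    return all(r.get("pet", False) for r in scene_results[-PRESET_FRAMES:])
```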
S514,确定推荐的视频模式为多镜模式,并基于多镜模式进行视频拍摄。S514. Determine that the recommended video mode is a multi-camera mode, and perform video shooting based on the multi-camera mode.
In an embodiment of the present application, after the recommendation decision is made and the recommended video mode is determined to be the multi-camera mode, a prompt control is displayed on the camera application interface, and the text "Multi-camera mode is recommended" is displayed on the prompt control; in response to the user's operation of triggering the prompt control, the mode switching module switches the current video mode to the multi-camera mode, and video shooting is performed based on the multi-camera mode.
The embodiment of the present application provides an overall solution that supports intelligent scene detection and video mode recommendation in the normal recording mode, and can identify typical user scenes according to the video modes that are currently supported. That is, in the normal recording mode, typical scene detection logic is enabled, and after a typical scene is detected, the corresponding optimal video mode is recommended to the user.
The currently supported video modes are the HDR mode, the portrait mode, the protagonist mode, the night scene mode, the macro mode and the multi-camera mode. The specific solution includes: entering the normal recording preview interface, where Master AI (the intelligent shooting assistant) is enabled by default and performs intelligent scene detection; popping up a recommendation dialog box according to the scene detection result; and entering the corresponding recording mode after the user taps to select it. After the selected recording mode is entered, scene detection is no longer supported. The current mode can be exited by manually tapping the close icon of the current mode or by tapping the record button; after recording is completed, the device automatically returns to the normal recording mode and starts a new round of scene detection.
参阅图16所示,为本申请另一实施例提供的视频拍摄方法的流程图。Referring to FIG. 16 , it is a flowchart of a video shooting method provided by another embodiment of the present application.
S601,进入普通录像模式。S601, enter the normal video recording mode.
S602,判断Master AI是否开启。若Master AI未开启,流程进入S603;若Master AI已开启,流程进入S604。S602. Determine whether the Master AI is enabled. If the Master AI is not enabled, the process enters S603; if the Master AI is enabled, the process enters S604.
S603,保持视频模式为普通录像模式。S603. Keep the video mode as a normal video recording mode.
S604,Master AI进行拍摄场景识别。S604, Master AI recognizes the shooting scene.
S605,判断是否有匹配场景。若有匹配场景,流程进入S606;若没有匹配场景,流程进入S603。S605. Determine whether there is a matching scene. If there is a matching scene, the process goes to S606; if there is no matching scene, the process goes to S603.
S606, a video mode decision recommendation is made based on the identified matching scene. The decision recommendation process specifically includes: S607, an HDR scene is detected and remains stable for a certain number of frames; S608, the video mode is switched to the HDR mode. S609, a night scene is detected and remains stable for a certain number of frames; S610, the video mode is switched to the night scene mode. S611, a single person is detected, the face ratio is greater than or equal to 1/3, and the scene remains stable for a certain number of frames; S612, the video mode is switched to the portrait mode and the blurring function is enabled. S613, multiple people are detected, the largest face ratio is less than or equal to 1/5, and the scene remains stable for a certain number of frames; S614, the video mode is switched to the protagonist mode. S615, a pet is detected and remains stable for a certain number of frames; S616, the video mode is switched to the multi-camera mode. S617, a macro scene is detected and remains stable for a certain number of frames; S618, the video mode is switched to the macro mode. When the current video mode is closed manually, S619, the normal recording mode is entered.
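The decision recommendation flow of FIG. 16 can be condensed into a short sketch. The function below assumes the per-scene detection results have already been stabilized over the required number of frames; the field names and the dictionary representation are assumptions, while the priority order and the 1/3 and 1/5 thresholds follow the steps above.

```python
def recommend_mode(scene: dict):
    """scene: stabilized detection results for the current window.
    Returns the recommended video mode, or None to stay in normal recording mode."""
    if scene.get("hdr"):
        return "HDR mode"
    if scene.get("night"):
        return "night scene mode"
    faces = scene.get("face_ratios", [])
    if len(faces) == 1 and faces[0] >= 1 / 3:
        return "portrait mode"          # single face taking at least 1/3 of the frame
    if len(faces) > 1 and max(faces) <= 1 / 5:
        return "protagonist mode"       # several faces, largest at most 1/5 of the frame
    if scene.get("pet"):
        return "multi-camera mode"
    if scene.get("macro"):
        return "macro mode"
    return None
```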
参阅图17所示,为Master AI进行智能场景检测的决策因子,参阅图18所示,为各个视频模式的视频规格。Refer to Figure 17 for the decision factors for intelligent scene detection for Master AI, and see Figure 18 for the video specifications of each video mode.
参阅图19所示,为本申请另一实施例提供的视频拍摄方法的流程图。Referring to FIG. 19 , it is a flowchart of a video shooting method provided by another embodiment of the present application.
S701,输入视频帧数据。S701. Input video frame data.
S702,根据决策因子输出待推荐的视频模式。S702. Output a video mode to be recommended according to the decision factor.
S703,判断当前摄像头是否满足待推荐的视频模式要求的变焦能力。若当前摄像头满足待推荐的视频模式要求的变焦能力,流程进入S704;若当前摄像头不满足待推荐的视频模式要求的变焦能力,流程返回S701。S703, judging whether the current camera meets the zoom capability required by the video mode to be recommended. If the current camera meets the zoom capability required by the video mode to be recommended, the process enters S704; if the current camera does not meet the zoom capability required by the video mode to be recommended, the process returns to S701.
In an embodiment of the present application, if the zoom ratio range of the current camera meets the zoom specification required by the video mode to be recommended, it is determined that the current camera has the zoom capability required by the video mode to be recommended; if the zoom ratio range of the current camera does not meet the zoom specification required by the video mode to be recommended, it is determined that the current camera does not have the zoom capability required by the video mode to be recommended. For example, as shown in FIG. 18, the zoom specification of the portrait mode is 1x-2x, and the zoom ratio range of the current camera is 1x-4x, so it is determined that the zoom ratio range of the current camera meets the zoom specification required by the portrait mode.
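A minimal sketch of this zoom capability check, treating both the camera's zoom ratio range and the mode's zoom specification as (min, max) pairs; the tuple representation and function name are assumptions, while the 1x-2x and 1x-4x figures reuse the example above.

```python
def supports_mode_zoom(camera_range, mode_range) -> bool:
    """camera_range, mode_range: (min_zoom, max_zoom) tuples.
    The current camera qualifies when its zoom range covers the range the mode requires."""
    cam_lo, cam_hi = camera_range
    mode_lo, mode_hi = mode_range
    return cam_lo <= mode_lo and cam_hi >= mode_hi

# Example from the text: portrait mode needs 1x-2x, the camera offers 1x-4x.
assert supports_mode_zoom((1.0, 4.0), (1.0, 2.0))
```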
S704,决策推荐的视频模式。即,将输出的视频模式决策为推荐的视频模式。S704, decide a recommended video mode. That is, the output video mode is decided as the recommended video mode.
The main differences between this solution and intelligent scene recognition for photo taking are: 1) In photo Master AI, as long as a face is recognized and the scene priority condition is met, portrait blurring is applied; in video Master AI, different modes are entered according to the size of the face: a scene in which the face occupies more than 1/3 of the frame enters the portrait blurring mode, and a scene in which the face occupies less than 1/5 of the frame enters the protagonist mode. 2) The implementation is different: when photo Master AI applies portrait blurring, it only adds blurring and beautification algorithms to the normal photographing pipeline; video Master AI performs a mode jump, jumping into the portrait mode (video blurring) and the protagonist mode.
The embodiment of the present application further provides an electronic device 100. Referring to FIG. 20, the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, a vehicle-mounted device, a smart home device and/or a smart city device; the embodiment of the present application does not specifically limit the type of the electronic device 100.
电子设备100可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(Universal Serial Bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(Subscriber Identification Module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (Universal Serial Bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2 , mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display screen 194, and A subscriber identification module (Subscriber Identification Module, SIM) card interface 195 and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, bone conduction sensor 180M, etc.
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that, the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 . In other embodiments of the present application, the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components. The illustrated components can be realized in hardware, software or a combination of software and hardware.
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(Application Processor,AP),调制解调处理器,图形处理器(Graphics ProcessingUnit,GPU),图像信号处理器(Image Signal Processor,ISP),控制器,视频编解码器,数字信号处理器(Digital Signal Processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor ( Image Signal Processor, ISP), controller, video codec, digital signal processor (Digital Signal Processor, DSP), baseband processor, and/or neural network processor (Neural-network Processing Unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。The controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(Inter-integrated Circuit,I2C)接口,集成电路内置音频(Inter-integrated CircuitSound,I2S)接口,脉冲编码调制(Pulse Code Modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(Mobile Industry Processor Interface,MIPI),通用输入输出(General-PurposeInput/Output,GPIO)接口,用户标识模块(Subscriber Identity Module,SIM)接口,和/或通用串行总线(Universal Serial Bus,USB)接口等。In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (Inter-integrated Circuit, I2C) interface, an integrated circuit built-in audio (Inter-integrated CircuitSound, I2S) interface, a pulse code modulation (Pulse Code Modulation, PCM) interface, a universal asynchronous receiver (universal asynchronous receiver) /transmitter, UART) interface, Mobile Industry Processor Interface (MIPI), General-Purpose Input/Output (GPIO) interface, Subscriber Identity Module (Subscriber Identity Module, SIM) interface, and/or Universal Serial Bus (Universal Serial Bus, USB) interface, etc.
I2C接口是一种双向同步串行总线,包括一根串行数据线(Serial Data Line,SDA)和一根串行时钟线(Derail Clock Line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。The I2C interface is a bidirectional synchronous serial bus, including a serial data line (Serial Data Line, SDA) and a serial clock line (Derail Clock Line, SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flashlight, the camera 193 and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100 .
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。The I2S interface can be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 . In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through the Bluetooth headset.
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。The PCM interface can also be used for audio communication, sampling, quantizing and encoding the analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。The UART interface is a universal serial data bus used for asynchronous communication. The bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 and the wireless communication module 160 . For example: the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to realize the Bluetooth function. In some embodiments, the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(Camera Serial Interface,CSI),显示屏串行接口(DisplaySerial Interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。The MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 . The MIPI interface includes a camera serial interface (Camera Serial Interface, CSI), a display serial interface (DisplaySerial Interface, DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to realize the shooting function of the electronic device 100 . The processor 110 communicates with the display screen 194 through the DSI interface to realize the display function of the electronic device 100 .
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。The GPIO interface can be configured by software. The GPIO interface can be configured as a control signal or as a data signal. In some embodiments, the GPIO interface can be used to connect the processor 110 with the camera 193 , the display screen 194 , the wireless communication module 160 , the audio module 170 , the sensor module 180 and so on. The GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备100,例如AR设备等。The USB interface 130 is an interface conforming to the USB standard specification, specifically, it may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like. The USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices 100, such as AR devices.
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。It can be understood that the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 . In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备100供电。The charging management module 140 is configured to receive a charging input from a charger. Wherein, the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 can receive charging input from the wired charger through the USB interface 130 . In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also supply power to the electronic device 100 through the power management module 141 .
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。The power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 . The power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the display screen 194 , the camera 193 , and the wireless communication module 160 . The power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110 . In some other embodiments, the power management module 141 and the charging management module 140 may also be set in the same device.
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。The wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(Low Noise Amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。The mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 . The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (Low Noise Amplifier, LNA) and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves and radiate them through the antenna 1 . In some embodiments, at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。A modem processor may include a modulator and a demodulator. Wherein, the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing. The low-frequency baseband signal is passed to the application processor after being processed by the baseband processor. The application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 . In some embodiments, the modem processor may be a stand-alone device. In some other embodiments, the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(WirelessLocal Area Networks,WLAN)(如无线保真(Wireless Fidelity,Wi-Fi)网络),蓝牙(Bluetooth,BT),全球导航卫星系统(Global Navigation Satellite System,GNSS),调频(Frequency Modulation,FM),近距离无线通信技术(Near Field Communication,NFC),红外技术(Infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。The wireless communication module 160 can provide wireless local area network (WirelessLocal Area Networks, WLAN) (such as wireless fidelity (Wireless Fidelity, Wi-Fi) network), bluetooth (Bluetooth, BT), global navigation satellite system, etc. applied on the electronic device 100. (Global Navigation Satellite System, GNSS), frequency modulation (Frequency Modulation, FM), near field communication technology (Near Field Communication, NFC), infrared technology (Infrared, IR) and other wireless communication solutions. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 . The wireless communication module 160 can also receive the signal to be sent from the processor 110 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 100 implements a display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing and connects the display screen 194 and the application processor. The GPU is configured to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is configured to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include one or N display screens 194, where N is a positive integer greater than 1.
The electronic device 100 can implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is configured to process data fed back by the camera 193. For example, during photographing, the shutter is opened, light is transmitted to the photosensitive element of the camera through the lens, the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on the noise, brightness, and skin tone of the image, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
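For illustration only, and not part of the patent disclosure: the kind of per-pixel brightness and white-balance adjustment attributed to the ISP above can be approximated in software. The function name and the gain parameters below are hypothetical; a real ISP performs this in dedicated hardware on raw sensor data rather than on packed ARGB pixels.

```kotlin
// Illustrative sketch only: a software analogue of an ISP brightness / white-balance pass.
// Real ISPs operate on raw Bayer data in hardware; this works on packed ARGB ints.
fun adjustPixels(pixels: IntArray, gainR: Float, gainG: Float, gainB: Float, brightness: Float): IntArray =
    IntArray(pixels.size) { i ->
        val p = pixels[i]
        val r = (((p ushr 16) and 0xFF) * gainR * brightness).toInt().coerceIn(0, 255)
        val g = (((p ushr 8) and 0xFF) * gainG * brightness).toInt().coerceIn(0, 255)
        val b = ((p and 0xFF) * gainB * brightness).toInt().coerceIn(0, 255)
        (0xFF shl 24) or (r shl 16) or (g shl 8) or b
    }
```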
The camera 193 is configured to capture still images or videos. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include one or N cameras 193, where N is a positive integer greater than 1.
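As a hedged, illustrative companion to the camera description, the sketch below shows one way an application layer could bind a preview and a video-capture use case to the back camera with the public AndroidX CameraX API (androidx.camera). The API names are believed correct, but the snippet is an assumption about how an app on such a device might record video; the patent does not prescribe this API, and the preview's surface provider is omitted for brevity.

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.Quality
import androidx.camera.video.QualitySelector
import androidx.camera.video.Recorder
import androidx.camera.video.VideoCapture
import androidx.core.content.ContextCompat

// Sketch: bind a preview plus a 1080p video-capture use case to the back camera.
fun bindVideoUseCases(context: android.content.Context, owner: androidx.lifecycle.LifecycleOwner) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val preview = Preview.Builder().build()           // surface provider not shown here
        val recorder = Recorder.Builder()
            .setQualitySelector(QualitySelector.from(Quality.FHD))
            .build()
        val videoCapture = VideoCapture.withOutput(recorder)
        provider.unbindAll()
        provider.bindToLifecycle(owner, CameraSelector.DEFAULT_BACK_CAMERA, preview, videoCapture)
    }, ContextCompat.getMainExecutor(context))
}
```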
The digital signal processor is configured to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 performs frequency selection, the digital signal processor is configured to perform a Fourier transform or the like on the frequency bin energy.
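To make the Fourier-transform step concrete, here is a minimal, illustrative Kotlin routine that computes the energy of a single frequency bin of a sampled signal with a naive DFT. It is a worked example of the mathematics only; the function name is hypothetical, and a real DSP would use an optimized FFT in hardware or firmware.

```kotlin
import kotlin.math.PI
import kotlin.math.cos
import kotlin.math.sin

// Naive DFT: energy |X[k]|^2 of frequency bin k over N real-valued samples.
fun binEnergy(samples: DoubleArray, k: Int): Double {
    val n = samples.size
    var re = 0.0
    var im = 0.0
    for (t in 0 until n) {
        val angle = 2.0 * PI * k * t / n
        re += samples[t] * cos(angle)
        im -= samples[t] * sin(angle)
    }
    return re * re + im * im
}
```

Comparing binEnergy over candidate bins is one simple way to pick the strongest frequency component of a sampled signal.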
The video codec is configured to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
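As an illustrative example of how an application could drive a hardware video codec of this kind, the hedged sketch below configures an H.264 (AVC) encoder through Android's public MediaCodec API. The bit-rate, frame-rate, and key-frame values are arbitrary placeholders, and the patent itself does not prescribe this API or these settings.

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Sketch: create and configure a surface-input H.264 (AVC) encoder.
fun createAvcEncoder(width: Int, height: Int): MediaCodec {
    val format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height).apply {
        setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
        setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000)  // placeholder bit rate
        setInteger(MediaFormat.KEY_FRAME_RATE, 30)        // placeholder frame rate
        setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)   // roughly one key frame per second
    }
    return MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC).apply {
        configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
    }
}
```

In a typical recording path, the caller would then obtain an input surface with createInputSurface(), call start(), and feed camera frames into that surface.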
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously learn on its own. Applications involving intelligent cognition of the electronic device 100, such as image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
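Face recognition of the kind an NPU accelerates is usually exposed to applications through higher-level libraries. The sketch below uses Google's ML Kit face-detection API as one plausible example; the library choice and its dependency (com.google.mlkit:face-detection) are assumptions rather than something the patent specifies, and whether inference actually runs on the NPU depends on the device.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

// Sketch: detect faces in a frame and report their bounding boxes.
fun detectFaces(frame: Bitmap) {
    val image = InputImage.fromBitmap(frame, /* rotationDegrees = */ 0)
    FaceDetection.getClient().process(image)
        .addOnSuccessListener { faces ->
            // A caller could, for example, compare each bounding box with the frame
            // size to judge how prominent a face is in the picture.
            faces.forEach { face -> println("face at ${face.boundingBox}") }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```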
The internal memory 121 may include one or more random access memories (RAM) and one or more non-volatile memories (NVM).
The random access memory may include a static random-access memory (SRAM), a dynamic random access memory (DRAM), a synchronous dynamic random access memory (SDRAM), a double data rate synchronous dynamic random access memory (DDR SDRAM; for example, the fifth-generation DDR SDRAM is generally referred to as DDR5 SDRAM), and the like.
The non-volatile memory may include a magnetic disk storage device and a flash memory.
By operating principle, the flash memory may include NOR flash, NAND flash, 3D NAND flash, and the like; by the number of potential levels per storage cell, it may include single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC), and the like; by storage specification, it may include universal flash storage (UFS), embedded multimedia card (eMMC), and the like.
The random access memory can be directly read and written by the processor 110, can be used to store executable programs (for example, machine instructions) of the operating system or other running programs, and can also be used to store data of users and applications.
The non-volatile memory can also store executable programs, data of users and applications, and the like, which can be loaded into the random access memory in advance for direct reading and writing by the processor 110.
The external memory interface 120 can be used to connect an external non-volatile memory to expand the storage capacity of the electronic device 100. The external non-volatile memory communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example, saving files such as music and videos in the external non-volatile memory.
The internal memory 121 or the external memory interface 120 is used to store one or more computer programs. The one or more computer programs are configured to be executed by the processor 110. The one or more computer programs include a plurality of instructions which, when executed by the processor 110, can implement the video shooting method performed on the electronic device 100 in the foregoing embodiments, so as to implement the video shooting function of the electronic device 100.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal. The audio module 170 can also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn", is configured to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or conduct a hands-free call through the speaker 170A.
The receiver 170B, also referred to as an "earpiece", is configured to convert an audio electrical signal into a sound signal. When the electronic device 100 answers a call or receives a voice message, the voice can be heard by placing the receiver 170B close to the ear.
The microphone 170C, also referred to as a "mic" or "mouthpiece", is configured to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 100 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
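To illustrate how an application might pull raw audio from such a microphone while recording video, here is a hedged sketch using Android's AudioRecord API. The sample rate and buffer sizing are placeholder choices, and the RECORD_AUDIO permission is assumed to have already been granted at runtime.

```kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Sketch: start capturing 48 kHz mono 16-bit PCM from the microphone.
@SuppressLint("MissingPermission") // assumes RECORD_AUDIO was granted at runtime
fun startMicCapture(): AudioRecord {
    val sampleRate = 48_000
    val minBuffer = AudioRecord.getMinBufferSize(
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)
    val recorder = AudioRecord(
        MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuffer * 2)
    recorder.startRecording()
    return recorder  // callers read PCM with recorder.read(...) and later call stop() / release()
}
```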
The headset jack 170D is configured to connect a wired headset. The headset jack 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 191 can generate a vibration prompt. The motor 191 can be used for incoming-call vibration prompts and for touch vibration feedback. For example, touch operations applied to different applications (such as photographing and audio playback) may correspond to different vibration feedback effects. Touch operations applied to different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191. Different application scenarios (for example, time reminders, receiving messages, alarm clocks, and games) may also correspond to different vibration feedback effects. The touch vibration feedback effect can also be customized.
The indicator 192 may be an indicator light, and can be used to indicate the charging status and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is configured to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out of the SIM card interface 195 to come into contact with or separate from the electronic device 100. The electronic device 100 can support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a nano-SIM card, a micro-SIM card, a SIM card, and the like. Multiple cards can be inserted into the same SIM card interface 195 at the same time, and the types of the multiple cards may be the same or different. The SIM card interface 195 is also compatible with different types of SIM cards, as well as with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 100 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.

An embodiment of this application further provides a computer storage medium. The computer storage medium stores computer instructions which, when run on the electronic device 100, cause the electronic device 100 to perform the foregoing related method steps to implement the video shooting method in the foregoing embodiments.
An embodiment of this application further provides a computer program product which, when run on a computer, causes the computer to perform the foregoing related steps to implement the video shooting method in the foregoing embodiments.
In addition, an embodiment of this application further provides an apparatus. The apparatus may specifically be a chip, a component, or a module, and may include a processor and a memory that are connected. The memory is configured to store computer-executable instructions. When the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip performs the video shooting method in the foregoing method embodiments.
The electronic device, computer storage medium, computer program product, or chip provided in this embodiment is configured to perform the corresponding method provided above. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above; details are not repeated here.
From the description of the foregoing implementations, a person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example. In actual applications, the foregoing functions may be allocated to different functional modules as required; that is, the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. The division of the modules or units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above embodiments are merely intended to describe the technical solutions of this application rather than to limit them. Although this application has been described in detail with reference to the preferred embodiments, a person of ordinary skill in the art should understand that modifications or equivalent replacements can be made to the technical solutions of this application without departing from the spirit and scope of the technical solutions of this application.
Claims (13)
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310006974.XA (CN115802144B) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |
| CN202311128121.XA (CN117336597B) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310006974.XA (CN115802144B) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |

Related Child Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311128121.XA (division, CN117336597B) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN115802144A | 2023-03-14 |
| CN115802144B | 2023-09-05 |
Family ID: 85428544

Family Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310006974.XA (CN115802144B, active) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |
| CN202311128121.XA (CN117336597B, active) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |

Family Applications After (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311128121.XA (CN117336597B, active) | 2023-01-04 | 2023-01-04 | Video shooting method and related equipment |

Country Status (1)

| Country | Publications |
|---|---|
| CN (2) | CN115802144B: Video shooting method and related equipment; CN117336597B: Video shooting method and related equipment |
Citations (9)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2010028606A * | 2008-07-23 | 2010-02-04 | Olympus Imaging Corp | Image capturing apparatus and method of controlling the same |
| CN103024165A * | 2012-12-04 | 2013-04-03 | Huawei Device Co., Ltd. | Method and device for automatically setting shooting mode |
| CN103907338A * | 2011-11-14 | 2014-07-02 | Sony Corp. | Image display in three-dimensional image capturing means used in two-dimensional capture mode |
| CN105530422A * | 2014-09-30 | 2016-04-27 | Lenovo (Beijing) Co., Ltd. | Electronic equipment, control method thereof, and control device |
| CN107465856A * | 2017-08-31 | 2017-12-12 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Camera method, device and terminal equipment |
| CN108111754A * | 2017-12-18 | 2018-06-01 | Vivo Mobile Communication Co., Ltd. | Method for determining an image acquisition mode, and mobile terminal |
| CN111630836A * | 2018-03-26 | 2020-09-04 | Huawei Technologies Co., Ltd. | Intelligent auxiliary control method and terminal equipment |
| CN111770277A * | 2020-07-31 | 2020-10-13 | Realme Chongqing Mobile Telecommunications Corp., Ltd. | Auxiliary shooting method, terminal and storage medium |
| CN113313626A * | 2021-05-20 | 2021-08-27 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Image processing method, image processing device, electronic equipment and storage medium |
Family Cites Families (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4421960B2 * | 2003-08-15 | 2010-02-24 | Fujifilm Corp. | Image processing apparatus and method, and program |
| US10313417B2 * | 2016-04-18 | 2019-06-04 | Qualcomm Incorporated | Methods and systems for auto-zoom based adaptive video streaming |
| US20180270445A1 * | 2017-03-20 | 2018-09-20 | Samsung Electronics Co., Ltd. | Methods and apparatus for generating video content |
| WO2019023915A1 * | 2017-07-31 | 2019-02-07 | SZ DJI Technology Co., Ltd. | Video processing method, device, aircraft, and system |
| CN112532859B * | 2019-09-18 | 2022-05-31 | Huawei Technologies Co., Ltd. | Video acquisition method and electronic equipment |
| CN113194242B * | 2020-01-14 | 2022-09-20 | Honor Device Co., Ltd. | Shooting method in a telephoto scene, and mobile terminal |
Also Published As

| Publication number | Publication date |
|---|---|
| CN115802144A | 2023-03-14 |
| CN117336597B | 2024-11-12 |
| CN117336597A | 2024-01-02 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| CP03 | Change of name, title or address |

CP03 details: Address after: Unit 3401, Unit A, Building 6, Shenye Zhongcheng, No. 8089 Hongli West Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen, Guangdong 518040; Patentee after: Honor Terminal Co., Ltd.; Country or region after: China. Address before: 3401, Unit A, Building 6, Shenye Zhongcheng, No. 8089 Hongli West Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen, Guangdong; Patentee before: Honor Device Co., Ltd.; Country or region before: China.