
CN111182202B - A wearable device-based content recognition method and wearable device - Google Patents


Info

Publication number
CN111182202B
CN111182202B (application CN201911088886.9A)
Authority
CN
China
Prior art keywords
content
host
wearable device
shooting
identification
Prior art date
Legal status
Active
Application number
CN201911088886.9A
Other languages
Chinese (zh)
Other versions
CN111182202A (en)
Inventor
施锐彬
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201911088886.9A priority Critical patent/CN111182202B/en
Publication of CN111182202A publication Critical patent/CN111182202A/en
Application granted granted Critical
Publication of CN111182202B publication Critical patent/CN111182202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A content recognition method based on a wearable device, and the wearable device, are provided. The wearable device comprises a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. The method comprises the following steps: when the smart host is stood upright perpendicular to the host bracket, detecting whether a user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host; if so, controlling the smart host, while upright and perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area; controlling the shooting module whose shooting center point is closest to the touch area to photograph the touch area to obtain a captured image; and performing content recognition on the captured image. The embodiments of the present application can improve the accuracy of content recognition using a wearable device.

Description

A wearable device-based content recognition method and wearable device

Technical Field

The present application relates to the technical field of wearable devices, and in particular to a wearable device-based content recognition method and a wearable device.

Background Art

At present, more and more wearable devices (such as phone watches) are equipped with cameras and can provide rich functions. In practice, it has been found that when the camera of a wearable device (such as a phone watch) is used for content recognition, if the area the user touches within the camera's shooting angle of view lies at the edge of that angle of view, the captured image is prone to distortion. The content in the image then becomes difficult for the wearable device to recognize accurately, which lowers the accuracy of content recognition.

Summary of the Invention

The embodiments of the present application disclose a wearable device-based content recognition method and a wearable device, which can improve the accuracy of content recognition performed with the wearable device.

A first aspect of the embodiments of the present application discloses a wearable device-based content recognition method. The wearable device includes a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. The method includes:

when the smart host is stood upright perpendicular to the host bracket, detecting whether a user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host;

if so, controlling the smart host, while upright and perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area;

controlling the shooting module whose shooting center point is closest to the touch area to photograph the touch area to obtain a captured image; and

performing content recognition on the captured image.

As an optional implementation in the first aspect of the embodiments of the present application, after the content recognition is performed on the captured image, the method further includes:

outputting the recognized content of the captured image for display on the screen of the smart host; and

controlling the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.

As another optional implementation in the first aspect of the embodiments of the present application, after the smart host is controlled to rotate until the screen faces the wearer's face, the method further includes:

obtaining an operation instruction issued by the wearer;

if the operation instruction indicates point-reading of the recognized content, extracting the text content from the recognized content;

obtaining the point-reading content selected by the wearer from the text content; and

broadcasting the point-reading content.

As another optional implementation in the first aspect of the embodiments of the present application, the method further includes:

if the operation instruction indicates a question search on the recognized content, extracting the graphic and text content from the recognized content;

searching an online question bank for question-and-answer information matching the graphic and text content; and

outputting the question-and-answer information for display on a learning device connected to the wearable device.

As another optional implementation in the first aspect of the embodiments of the present application, after the graphic and text content is extracted from the recognized content and before the online question bank is searched for matching question-and-answer information, the method further includes:

recognizing the semantics of the graphic and text content, and judging according to the semantics whether the graphic and text content contains complete question information; if not, adjusting the shooting angle of view of the relevant shooting module of the smart host until the complete question information is included, and then performing the step of searching the online question bank for question-and-answer information matching the graphic and text content.
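This optional re-capture loop can be sketched as follows. Everything here is a hedged illustration: `widen_view` and `search_bank` are hypothetical hooks standing in for the angle adjustment and the question-bank query, and the end-of-sentence check is a toy stand-in for the semantic judgement the text describes.

```python
def has_complete_question(text: str) -> bool:
    """Toy stand-in for the semantic judgement: treat the question as
    complete when it ends with a question mark (hypothetical heuristic)."""
    return text.rstrip().endswith(("?", "？"))

def search_with_completion(text: str, widen_view, search_bank, max_widen=3):
    """Widen the shooting angle of view and re-recognize until the
    question information is complete, then query the question bank."""
    for _ in range(max_widen):
        if has_complete_question(text):
            break
        text = widen_view()  # adjust the shooting angle, re-capture, re-recognize
    return search_bank(text)
```

On a real device, `widen_view` would rotate or zoom the shooting module and re-run content recognition, and `search_bank` would query the online question bank over the network.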

A second aspect of the embodiments of the present application discloses a wearable device. The wearable device includes a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. The smart host includes:

a touch detection unit, configured to detect, when the smart host is stood upright perpendicular to the host bracket, whether a user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host;

a first control unit, configured to, when the touch detection unit detects that the user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host, control the smart host, while upright and perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area;

a second control unit, configured to control the shooting module whose shooting center point is closest to the touch area to photograph the touch area to obtain a captured image; and

a content recognition unit, configured to perform content recognition on the captured image.

As an optional implementation in the second aspect of the embodiments of the present application, the smart host further includes:

a first output unit, configured to output, after the content recognition unit performs content recognition on the captured image, the recognized content of the captured image for display on the screen of the smart host; and

a third control unit, configured to control the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.

As another optional implementation in the second aspect of the embodiments of the present application, the smart host further includes:

a first obtaining unit, configured to obtain an operation instruction issued by the wearer after the third control unit controls the smart host to rotate until the screen faces the wearer's face;

a first extraction unit, configured to extract the text content from the recognized content when the operation instruction obtained by the first obtaining unit indicates point-reading of the recognized content;

a second obtaining unit, configured to obtain the point-reading content selected by the wearer from the text content; and

a broadcasting unit, configured to broadcast the point-reading content.

As another optional implementation in the second aspect of the embodiments of the present application, the smart host further includes:

a second extraction unit, configured to extract the graphic and text content from the recognized content when the operation instruction obtained by the first obtaining unit indicates a question search on the recognized content;

an online search unit, configured to search an online question bank for question-and-answer information matching the graphic and text content; and

a second output unit, configured to output the question-and-answer information for display on a learning device connected to the wearable device.

As another optional implementation in the second aspect of the embodiments of the present application, the smart host further includes:

a semantic recognition unit, configured to recognize, after the second extraction unit extracts the graphic and text content from the recognized content and before the online search unit searches the online question bank, the semantics of the graphic and text content, and to judge according to the semantics whether the graphic and text content contains complete question information; and

an adjustment unit, configured to, when the semantic recognition unit judges that the graphic and text content does not contain complete question information, adjust the shooting angle of view of the relevant shooting module of the smart host until the complete question information is included, and trigger the online search unit to search the online question bank for question-and-answer information matching the graphic and text content that contains the complete question information.

A third aspect of the embodiments of the present application discloses another wearable device. The wearable device includes a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. The smart host includes:

a memory storing executable program code; and

a processor coupled to the memory;

wherein the processor invokes the executable program code stored in the memory to perform all or part of the steps of any wearable device-based content recognition method disclosed in the first aspect of the embodiments of the present application.

A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform all or part of the steps of any wearable device-based content recognition method disclosed in the first aspect of the embodiments of the present application.

A fifth aspect of the embodiments of the present application discloses a computer program product which, when run on a computer, causes the computer to perform all or part of the steps of any wearable device-based content recognition method disclosed in the first aspect of the embodiments of the present application.

Compared with the prior art, the embodiments of the present application have the following beneficial effects. In the embodiments of the present application, the wearable device may include a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. When the smart host is stood upright perpendicular to the host bracket, it can be detected whether a user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host; if so, the smart host can be controlled, while upright and perpendicular to the host bracket, to rotate until the shooting center point of one of its shooting modules is closest to the touch area; that shooting module can then be controlled to photograph the touch area to obtain a captured image, and on this basis content recognition is performed on the captured image. It can be seen that, by rotating the smart host of the wearable device to change the shooting angle of view of its shooting module, the user's touch area that originally lay at the edge of the shooting angle of view is brought as close as possible to the shooting center point. This reduces the distortion of the captured image and hence the adverse effect of distortion on content recognition, thereby improving the accuracy of content recognition using the wearable device.

Brief Description of the Drawings

In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required in the embodiments are briefly introduced below. For those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.

FIG. 1 is a schematic flowchart of a wearable device-based content recognition method disclosed in an embodiment of the present application;

FIG. 2 is a schematic flowchart of another wearable device-based content recognition method disclosed in an embodiment of the present application;

FIG. 3 is a schematic flowchart of yet another wearable device-based content recognition method disclosed in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;

FIG. 5 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application;

FIG. 6 is a schematic structural diagram of yet another wearable device disclosed in an embodiment of the present application;

FIG. 7 is a schematic structural diagram of yet another wearable device disclosed in an embodiment of the present application.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

It should be noted that the terms "comprising" and "having" and any variants thereof in the embodiments of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.

Furthermore, the terms "installed", "arranged", "provided with", "connected", and "coupled" should be construed broadly. For example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary, or internal communication between two devices, elements, or components. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific situation.

The embodiments of the present application disclose a wearable device-based content recognition method and a wearable device, which can improve the accuracy of content recognition using the wearable device. A detailed description is given below with reference to the drawings.

To better understand the control method of the wearable device disclosed in the embodiments of the present application, a wearable device to which the method applies is first described. Such a wearable device may include a smart host, a host bracket, and side straps: the first end of the host bracket and the first end of one side strap are connected by plugging, and the second end of the host bracket and the second end of the other side strap are likewise connected by plugging. The first end (rotating end) of the smart host is movably connected to the first end of the host bracket through a rotating ball, while the second end (free end) of the smart host is unconnected. Normally, the smart host can lie flat on the host bracket, with the bottom side of the smart host against the upper surface of the host bracket; when the smart host is flipped to different angles relative to the host bracket via the rotating ball, its bottom side forms an angle with the upper surface of the host bracket (adjustable between 0° and 180°).

When the smart host is stood upright perpendicular to the host bracket (that is, the bottom side of the smart host is at 90° to the upper surface of the host bracket), the smart host can also rotate to any angle within a 360° range via the rotating ball.

It should be noted that the wearable device described above is only one implementation of a wearable device to which the wearable device-based content recognition method disclosed in the embodiments of the present application applies, and should not be construed as limiting the present application.

Referring to FIG. 1, FIG. 1 is a schematic flowchart of a wearable device-based content recognition method disclosed in an embodiment of the present application. The wearable device includes a smart host and a host bracket, and the smart host, when stood upright perpendicular to the host bracket, can rotate to any angle within a 360° range. As shown in FIG. 1, the content recognition method may include the following steps:

101. When the smart host of the wearable device is stood upright perpendicular to its host bracket, the wearable device detects whether a user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 102 to 104 are performed; otherwise, this flow ends.

In this embodiment, when the smart host of the wearable device is stood upright perpendicular to its host bracket, any shooting module of the smart host may be in a pre-shooting state and output its shooting angle of view for display on the screen of the smart host. For example, the rear shooting module of the smart host may be in the pre-shooting state and output its shooting angle of view for display on the screen; as another example, the front and rear shooting modules of the smart host may both be in the pre-shooting state, and if either shooting module detects the face of the wearer of the wearable device, the shooting angle of view of the other shooting module is output for display on the screen of the smart host.

Further, detecting whether the user's touch area lies at the edge of the shooting angle of view of any shooting module of the smart host may specifically include: detecting the user's touch operation on the screen of the smart host; when the touch operation is detected, obtaining the touch area of the touch operation; and judging whether the distance between the touch area and the shooting center point of any shooting module of the smart host is greater than a preset threshold. If it is greater, the touch area can be judged to lie at the edge of the shooting angle of view of that shooting module, and steps 102 to 104 are then performed.
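The distance test above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the normalized screen coordinates, the (0.5, 0.5) center point, and the 0.35 threshold are all assumptions chosen for the example.

```python
def is_at_view_edge(touch, center=(0.5, 0.5), threshold=0.35):
    """Return True when the touch area lies farther from the shooting
    center point than the preset threshold, i.e. at the view's edge.
    `touch` and `center` are (x, y) in normalized screen coordinates."""
    dx = touch[0] - center[0]
    dy = touch[1] - center[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold
```

A touch near a corner of the screen, such as (0.95, 0.90), would trigger the rotation of step 102, while a touch near the center would not.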

102. The wearable device controls its smart host, while upright and perpendicular to its host bracket, to rotate until the shooting center point of one of the smart host's shooting modules is closest to the touch area.

In this embodiment, the wearable device may calculate a target rotation direction and a target rotation angle for the smart host according to the coordinate information of the touch area and the current rotation angle of the smart host (within the 360° range), and then control the smart host to rotate accordingly, so that the shooting center point of one of its shooting modules is closest to the touch area.
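One way to derive that rotation is sketched below, under the assumption that the touch area and the shooting center are given in the same normalized plane; this simple bearing geometry is an illustration, not the calculation the device actually performs.

```python
import math

def target_rotation(touch_x, touch_y, center=(0.5, 0.5), current_deg=0.0):
    """Return (direction, degrees) for the shorter rotation that points
    the shooting center toward the touch area: 'ccw' or 'cw', 0-180°."""
    # Bearing from the shooting center point to the touch area.
    bearing = math.degrees(math.atan2(touch_y - center[1],
                                      touch_x - center[0])) % 360.0
    # Signed difference from the host's current rotation angle,
    # reduced to the shorter of the two possible rotations.
    delta = (bearing - current_deg) % 360.0
    return ("ccw", delta) if delta <= 180.0 else ("cw", 360.0 - delta)
```

For a touch directly "above" the center, the shorter rotation is 90° counter-clockwise; directly "below", 90° clockwise.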

By implementing the above method, the user's touch area that originally lay at the edge of the shooting angle of view can be brought as close as possible to the shooting center point, reducing the distortion of the captured image. When the subsequent steps 103 and 104 are performed, this in turn reduces the adverse effect of distortion on content recognition of the captured image, thereby improving the accuracy of content recognition using the wearable device.

103. The wearable device controls the shooting module whose shooting center point is closest to the touch area to photograph the touch area, obtaining a captured image.

As an optional implementation, the wearable device may detect a voice keyword uttered by the user and perform the corresponding shooting operation according to the voice keyword. For example, when the voice keyword "photo" is detected, the shooting module whose shooting center point is closest to the touch area can be controlled to photograph the touch area to obtain a captured image; when the voice keyword "continuous shooting" is detected, that shooting module can be controlled to shoot continuously according to a preset number of shots and a preset interval; and when the voice keywords "after 2 seconds" and "photo" are detected, that shooting module can be controlled to shoot after a 2-second delay.

作为另一种可选的实施方式，可穿戴设备还可以根据用户对该智能主机的屏幕的触及操作执行对应的拍摄操作。例如，当上述触及操作为单次触及时，可以控制上述拍摄中心点离上述触及区域最近的智能主机的任一拍摄模组对上述触及区域进行拍摄，获得拍摄图像；又例如，当上述触及操作为双击时，可以控制上述任一拍摄模组延迟预设时间后进行拍摄；再例如，当上述触及操作为长按时，可以控制上述任一拍摄模组自动对焦后进行拍摄。As another optional implementation, the wearable device may also perform a corresponding shooting operation according to the user's touch operation on the screen of the smart host. For example, when the touch operation is a single tap, the shooting module of the smart host whose shooting center point is closest to the touched area may be controlled to photograph that area and obtain a captured image; when the touch operation is a double tap, that shooting module may be controlled to shoot after a preset delay; and when the touch operation is a long press, that shooting module may be controlled to autofocus and then shoot.

104、可穿戴设备对该拍摄图像进行内容识别。104. The wearable device performs content recognition on the captured image.

本申请实施例中,上述可穿戴设备对该拍摄图像进行内容识别,具体可以包括:将该拍摄图像依次通过预处理模型和内容识别模型,获得该拍摄图像的识别内容。其中,上述预处理模型可以执行图文分割、灰度化、二值化、降噪、纠偏、字符切割、文本合并等步骤,本申请实施例不作具体限定。In the embodiment of the present application, the above-mentioned wearable device performing content recognition on the captured image may specifically include: sequentially passing the captured image through a preprocessing model and a content recognition model to obtain the recognition content of the captured image. The above preprocessing model may perform steps such as image and text segmentation, grayscale, binarization, noise reduction, deviation correction, character cutting, and text merging, which are not specifically limited in this embodiment of the present application.
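As a minimal sketch of two of the preprocessing steps named above (grayscale conversion and binarization) on a nested-list image; the luminance weighting formula and the threshold are common defaults assumed here, and a real preprocessing model would also perform noise reduction, deskewing, and character segmentation:

```python
def preprocess(rgb_image, threshold=128):
    """Sketch: grayscale then binarize a small image given as a nested
    list of (r, g, b) tuples. Returns a nested list of 0/1 pixels.
    The ITU-R 601 luma weights and the threshold of 128 are assumed
    defaults, not values specified by the patent."""
    gray = [[int(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]
    # Binarization: pixels at or above the threshold become foreground 1.
    binary = [[1 if px >= threshold else 0 for px in row] for row in gray]
    return binary
```

The binarized output would then feed the content recognition model in step 104.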

可见,实施图1所描述的内容识别方法,能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that implementing the content recognition method described in FIG. 1 can improve the accuracy of content recognition using the wearable device.

请参阅图2,图2是本申请实施例公开的另一种基于可穿戴设备的内容识别方法的流程示意图。其中,该可穿戴设备包括智能主机和主机支架,该智能主机在立起来与该主机支架垂直时能够发生360°范围内的任意角度的旋转。如图2所示,该内容识别方法可以包括以下步骤:Please refer to FIG. 2. FIG. 2 is a schematic flowchart of another method for content recognition based on a wearable device disclosed in an embodiment of the present application. Wherein, the wearable device includes a smart host and a host bracket, and the smart host can rotate at any angle within a range of 360° when the smart host is vertical to the host bracket. As shown in Figure 2, the content identification method may include the following steps:

201、在可穿戴设备的智能主机立起来与其主机支架垂直时，该可穿戴设备检测用户的触及区域是否处于其智能主机的任一拍摄模组的拍摄视角的边缘，若是，执行步骤202~步骤210；否则，结束本流程。201. When the smart host of the wearable device is stood up perpendicular to its host bracket, the wearable device detects whether the user's touched area is at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 202 to 210 are performed; otherwise, the process ends.

202、可穿戴设备控制其智能主机在立起来与其主机支架垂直时旋转至该智能主机的任一拍摄模组的拍摄中心点离上述触及区域最近。202. The wearable device controls its smart host, while stood up perpendicular to its host bracket, to rotate until the shooting center point of one of the smart host's shooting modules is closest to the touched area.

203、可穿戴设备控制上述拍摄中心点离上述触及区域最近的智能主机的任一拍摄模组对上述触及区域进行拍摄,获得拍摄图像。203. The wearable device controls any shooting module of the smart host whose shooting center point is closest to the touched area to photograph the touched area to obtain a photographed image.

204、可穿戴设备对该拍摄图像进行内容识别。204. The wearable device performs content recognition on the captured image.

205、可穿戴设备输出该拍摄图像的识别内容至其智能主机的屏幕上显示。205. The wearable device outputs the identification content of the captured image to display on the screen of its smart host.

作为一种可选的实施方式,可穿戴设备还可以输出该拍摄图像的识别内容至该可穿戴设备连接的学习设备上显示。示例性的,上述学习设备可以包括具备显示屏的各类设备或系统(如学习平板、带屏智能音箱等),本申请实施例不作具体限定。As an optional implementation manner, the wearable device may also output the identification content of the captured image to be displayed on a learning device connected to the wearable device. Exemplarily, the above learning device may include various types of devices or systems having a display screen (eg, a learning tablet, a smart speaker with a screen, etc.), which is not specifically limited in the embodiment of the present application.

206、可穿戴设备控制其智能主机旋转至上述屏幕朝向该可穿戴设备的佩戴者的人脸,以方便该佩戴者查看上述拍摄图像的识别内容。206. The wearable device controls its smart host to rotate so that the above-mentioned screen faces the face of the wearer of the wearable device, so as to facilitate the wearer to view the identification content of the above-mentioned captured image.

举例来说，可穿戴设备可以判断上述智能主机的任一拍摄模组是否检测到该可穿戴设备的佩戴者的人脸；若检测到，获取该佩戴者的人脸所处的方向，并控制上述智能主机旋转至该方向，从而使上述智能主机的屏幕朝向该佩戴者的人脸，以方便该佩戴者查看上述拍摄图像的识别内容。For example, the wearable device can determine whether any shooting module of the smart host detects the face of the wearer of the wearable device; if so, it obtains the direction in which the wearer's face lies and controls the smart host to rotate to that direction, so that the screen of the smart host faces the wearer's face and the wearer can conveniently view the recognition content of the captured image.

又举例来说，可穿戴设备还可以检测用户发出的语音指令，并判断该语音指令的声纹特征是否与该可穿戴设备的佩戴者的声纹特征相匹配；若相匹配，获取发出该语音指令的音源方向，并控制上述智能主机旋转至该方向。For another example, the wearable device can also detect a voice instruction uttered by the user and determine whether the voiceprint features of the voice instruction match the voiceprint features of the wearer of the wearable device; if they match, it obtains the direction of the sound source that issued the voice instruction and controls the smart host to rotate to that direction.
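The voiceprint match in this example could be sketched as a cosine-similarity comparison between feature vectors; the vector representation and the 0.8 threshold are assumptions for illustration, not the patent's actual matching method:

```python
import math

def voiceprint_matches(cmd_vec, wearer_vec, threshold=0.8):
    """Sketch: compare the voiceprint feature vector extracted from the
    voice instruction with the wearer's enrolled voiceprint vector by
    cosine similarity. The feature extraction itself is out of scope."""
    dot = sum(a * b for a, b in zip(cmd_vec, wearer_vec))
    norm = math.hypot(*cmd_vec) * math.hypot(*wearer_vec)
    # Guard against zero vectors, then apply the similarity threshold.
    return norm > 0 and dot / norm >= threshold
```

Only when this check passes would the host rotate toward the estimated sound-source direction.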

207、可穿戴设备获取该佩戴者发出的操作指令。207. The wearable device acquires the operation instruction issued by the wearer.

示例性的，可穿戴设备可以检测上述佩戴者发出的语音指令，并通过关键词检测技术或自然语言处理技术获取该语音指令对应的操作指令；可穿戴设备也可以检测上述佩戴者对该可穿戴设备的智能主机的屏幕的触及操作，并获取该触及操作的触及区域对应的操作指令。Exemplarily, the wearable device may detect a voice instruction uttered by the wearer and obtain the operation instruction corresponding to that voice instruction through keyword detection or natural language processing; the wearable device may also detect the wearer's touch operation on the screen of its smart host and obtain the operation instruction corresponding to the touched area of that touch operation.

208、若该操作指令表示对上述识别内容进行点读,则可穿戴设备提取上述识别内容中的文本内容。208. If the operation instruction indicates that the above-mentioned identification content is to be read, the wearable device extracts the text content in the above-mentioned identification content.

209、可穿戴设备获取其佩戴者从该文本内容中选择的点读内容。209. The wearable device acquires the click-through content selected by the wearer from the text content.

210、可穿戴设备播报该点读内容。210. The wearable device broadcasts the read content.

通过实施上述方法,可穿戴设备可以在对拍摄图像进行内容识别的基础上实现对识别内容的点读功能,从而能够拓展可穿戴设备的使用场景,提升其实用性。By implementing the above method, the wearable device can realize the point-to-read function of the recognized content on the basis of performing content recognition on the captured image, thereby expanding the usage scenarios of the wearable device and improving its practicability.

作为一种可选的实施方式，可穿戴设备在执行步骤201~步骤210，播报上述点读内容之后，还可以执行以下步骤：当上述点读内容为英文单词时，可以联网搜索该英文单词的相关信息，其中，该相关信息可以包括该英文单词的读音、翻译、例句；输出上述相关信息至该可穿戴设备的智能主机的屏幕上，以浮窗的形式显示在该英文单词附近；依次播报该相关信息；进一步的，可穿戴设备还可以检测其佩戴者对该英文单词的跟读，并判断跟读是否正确，输出相应的评分。通过实施上述方法，能够帮助可穿戴设备的佩戴者便捷地查阅和翻译英文学习资料，同时对不熟识的英文单词进行学习，提升了可穿戴设备的实用性。As an optional implementation, after performing steps 201 to 210 and broadcasting the read-aloud content, the wearable device may further perform the following steps: when the read-aloud content is an English word, search the Internet for related information on the word, where the related information may include the word's pronunciation, translation, and example sentences; output the related information to the screen of the wearable device's smart host and display it near the English word in the form of a floating window; and broadcast the related information in turn. Further, the wearable device may also detect the wearer's follow-up reading of the English word, judge whether the follow-up reading is correct, and output a corresponding score. Implementing the above method helps the wearer of the wearable device conveniently look up and translate English learning materials while learning unfamiliar English words, improving the practicality of the wearable device.
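The follow-up-reading check mentioned above could, as one illustrative approach, score the wearer's repetition against the expected word with a character-level Levenshtein similarity; a real system would more likely compare phonemes from speech recognition, and the 0-100 scale is an assumption:

```python
def followup_score(expected, spoken):
    """Sketch: score a repeated word against the expected word using
    edit distance, mapped to a 0-100 similarity score. `spoken` is the
    transcription of the wearer's follow-up reading."""
    m, n = len(expected), len(spoken)
    # DP table for Levenshtein distance; first row/column are i+j.
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (expected[i - 1] != spoken[j - 1]))
    return round(100 * (1 - d[m][n] / max(m, n, 1)))
```

A threshold on this score (say, 80) could then decide whether the follow-up reading counts as correct.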

可见,实施图2所描述的内容识别方法,能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that, implementing the content recognition method described in FIG. 2 can improve the accuracy of content recognition using the wearable device.

此外,实施图2所描述的内容识别方法,能够利用可穿戴设备实现点读功能,从而拓展了可穿戴设备的使用场景,提升了其实用性。In addition, by implementing the content recognition method described in FIG. 2 , the wearable device can be used to realize the point-to-point reading function, thereby expanding the usage scenarios of the wearable device and improving its practicability.

请参阅图3,图3是本申请实施例公开的又一种基于可穿戴设备的内容识别方法的流程示意图。其中,该可穿戴设备包括智能主机和主机支架,该智能主机在立起来与该主机支架垂直时能够发生360°范围内的任意角度的旋转。如图3所示,该内容识别方法可以包括以下步骤:Please refer to FIG. 3 . FIG. 3 is a schematic flowchart of another method for content recognition based on a wearable device disclosed in an embodiment of the present application. Wherein, the wearable device includes a smart host and a host bracket, and the smart host can rotate at any angle within a range of 360° when the smart host is vertical to the host bracket. As shown in Figure 3, the content identification method may include the following steps:

301、在可穿戴设备的智能主机立起来与其主机支架垂直时，该可穿戴设备检测用户的触及区域是否处于其智能主机的任一拍摄模组的拍摄视角的边缘，若是，执行步骤302~步骤309；否则，结束本流程。301. When the smart host of the wearable device is stood up perpendicular to its host bracket, the wearable device detects whether the user's touched area is at the edge of the shooting angle of view of any shooting module of the smart host; if so, steps 302 to 309 are performed; otherwise, the process ends.

302、可穿戴设备控制其智能主机在立起来与其主机支架垂直时旋转至该智能主机的任一拍摄模组的拍摄中心点离上述触及区域最近。302. The wearable device controls its smart host, while stood up perpendicular to its host bracket, to rotate until the shooting center point of one of the smart host's shooting modules is closest to the touched area.

303、可穿戴设备控制上述拍摄中心点离上述触及区域最近的智能主机的任一拍摄模组对上述触及区域进行拍摄,获得拍摄图像。303. The wearable device controls any shooting module of the smart host whose shooting center point is closest to the touched area to photograph the touched area to obtain a photographed image.

304、可穿戴设备对该拍摄图像进行内容识别。304. The wearable device performs content recognition on the captured image.

305、可穿戴设备输出该拍摄图像的识别内容至其智能主机的屏幕上显示。305. The wearable device outputs the identification content of the captured image to display on the screen of its smart host.

306、可穿戴设备控制其智能主机旋转至上述屏幕朝向该可穿戴设备的佩戴者的人脸,以方便该佩戴者查看上述拍摄图像的识别内容。306. The wearable device controls its smart host to rotate so that the above-mentioned screen faces the face of the wearer of the wearable device, so as to facilitate the wearer to view the identification content of the above-mentioned captured image.

307、可穿戴设备获取该佩戴者发出的操作指令。307. The wearable device acquires the operation instruction issued by the wearer.

308、若该操作指令表示对上述识别内容进行搜题,则可穿戴设备提取上述识别内容中的图文内容。308. If the operation instruction indicates that the above-mentioned identification content is to be searched, the wearable device extracts the graphic and text content in the above-mentioned identification content.

309、可穿戴设备识别该图文内容的语义,根据该语义判断该图文内容是否包含完整题目信息,若不包含,执行步骤310~步骤312;否则,执行步骤311~步骤312。309. The wearable device recognizes the semantics of the graphic content, and judges whether the graphic content contains complete topic information according to the semantics. If not, perform steps 310 to 312; otherwise, perform steps 311 to 312.

示例性的,当可穿戴设备检测到该图文内容出现不完整字符或不符合预设语法标准的语句时,可以判断出该图文内容不包含完整题目信息。Exemplarily, when the wearable device detects that the graphic content contains incomplete characters or sentences that do not meet the preset grammatical standard, it can be determined that the graphic content does not contain complete topic information.
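The completeness judgment in this example might be approximated with simple heuristics like the following; the replacement-character test and the terminal-punctuation rule are stand-ins for the "incomplete characters" and "preset grammar standard" checks, not the patent's actual semantic model:

```python
def looks_complete(text):
    """Sketch: treat recognized question text as incomplete if it
    contains the Unicode replacement character (a stand-in for broken
    glyphs from OCR) or does not end with terminal punctuation."""
    if "\ufffd" in text:  # broken / unrecognized character present
        return False
    # Accept common Chinese and ASCII sentence-ending punctuation.
    return text.rstrip().endswith(("。", "？", "！", ".", "?", "!"))
```

If this check fails, the flow falls through to step 310 and widens or rotates the shooting angle of view.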

310、可穿戴设备调整其智能主机的任一拍摄模组的拍摄视角至包含上述完整题目信息。310. The wearable device adjusts the shooting angle of any shooting module of its smart host to include the above-mentioned complete topic information.

示例性的，可穿戴设备可以扩大其智能主机的任一拍摄模组的拍摄视角，以使其包含上述完整题目信息；也可以旋转其智能主机，并在旋转过程中持续监测任一拍摄模组的拍摄视角，直至其包含上述完整题目信息。Exemplarily, the wearable device may widen the shooting angle of view of one of its smart host's shooting modules so that it contains the complete question information; it may also rotate its smart host and continuously monitor the shooting angle of view of that shooting module during the rotation until it contains the complete question information.

311、可穿戴设备从联网题库中搜索与上述图文内容相匹配的题目问答信息。311. The wearable device searches the online question bank for question and answer information that matches the above-mentioned graphic and text content.

312、可穿戴设备将该题目问答信息输出至与该可穿戴设备连接的学习设备上显示。312. The wearable device outputs the question and answer information to the learning device connected to the wearable device for display.

本申请实施例中,通过执行上述步骤308~步骤312,可穿戴设备能够实现拍照搜题的功能,从而帮助该可穿戴设备的佩戴者进行自主学习,提升学习积极性。In the embodiment of the present application, by performing the above steps 308 to 312, the wearable device can realize the function of taking pictures and searching for questions, thereby helping the wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.

作为一种可选的实施方式，可穿戴设备在执行上述步骤308时，若所获取该可穿戴设备的佩戴者发出的操作指令表示对上述识别内容进行搜题并批改，则还可以执行以下步骤：判别并分别提取上述识别内容中的题目信息和作答信息；从联网题库中搜索与上述题目信息相匹配的题目问答信息；根据该题目问答信息，对上述作答信息进行批改，获得批改结果；将该批改结果输出至与该可穿戴设备连接的学习设备上显示。As an optional implementation, when performing step 308, if the obtained operation instruction issued by the wearer of the wearable device indicates that the recognition content is to be searched as a question and corrected, the wearable device may further perform the following steps: distinguish and separately extract the question information and the answer information from the recognition content; search the online question bank for question-and-answer information matching the question information; correct the answer information according to that question-and-answer information to obtain a correction result; and output the correction result to a learning device connected to the wearable device for display.
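The correction step in this flow can be sketched as an item-by-item comparison between the extracted answer information and the matched question-bank answers; the dict-of-question-id data shape is an assumption for illustration:

```python
def grade(answers, answer_key):
    """Sketch: compare the wearer's extracted answers with the answers
    returned from the online question bank, question by question.
    Both inputs map a question id to an answer string."""
    result = {}
    for qid, given in answers.items():
        expected = answer_key.get(qid)
        # Missing key entries are graded as wrong in this sketch.
        result[qid] = "correct" if given == expected else "wrong"
    return result
```

The "wrong" entries in the result would then drive the highlighted display on the connected learning device.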

进一步的,若上述批改结果表示上述作答信息存在错误,还可以在上述学习设备上高亮显示存在错误的作答信息,以方便用户查看。Further, if the above correction result indicates that there is an error in the above answering information, the wrong answering information may also be highlighted on the above learning device to facilitate the user to view.

通过实施上述方法,能够进一步帮助可穿戴设备的佩戴者进行自主学习,培养该佩戴者独立解决问题的能力和积极性。By implementing the above method, the wearer of the wearable device can be further helped to perform autonomous learning, and the ability and enthusiasm of the wearer to solve problems independently can be cultivated.

可见,实施图3所描述的内容识别方法,能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that implementing the content recognition method described in FIG. 3 can improve the accuracy of content recognition using the wearable device.

此外,实施图3所描述的内容识别方法,能够利用可穿戴设备实现搜题功能,从而帮助该可穿戴设备的佩戴者进行自主学习,提升学习积极性。In addition, by implementing the content recognition method described in FIG. 3 , the wearable device can be used to realize a question search function, thereby helping the wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.

请参阅图4,图4是本申请实施例公开的一种可穿戴设备的结构示意图。其中,该可穿戴设备包括智能主机和主机支架,该智能主机在立起来与该主机支架垂直时能够发生360°范围内的任意角度的旋转。如图4所示,该可穿戴设备的智能主机可以包括:Please refer to FIG. 4 , which is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. Wherein, the wearable device includes a smart host and a host bracket, and the smart host can rotate at any angle within a range of 360° when the smart host is vertical to the host bracket. As shown in Figure 4, the smart host of the wearable device may include:

触及检测单元401,用于在可穿戴设备的智能主机立起来与其主机支架垂直时,检测用户的触及区域是否处于该智能主机的任一拍摄模组的拍摄视角的边缘;The touch detection unit 401 is used to detect whether the touch area of the user is at the edge of the shooting angle of view of any shooting module of the smart host when the smart host of the wearable device is erected perpendicular to its host bracket;

第一控制单元402，用于当上述触及检测单元401检测出用户的触及区域处于上述智能主机的任一拍摄模组的拍摄视角的边缘时，控制该智能主机在立起来与上述主机支架垂直时旋转至该智能主机的任一拍摄模组的拍摄中心点离上述触及区域最近；The first control unit 402 is configured to, when the touch detection unit 401 detects that the user's touched area is at the edge of the shooting angle of view of any shooting module of the smart host, control the smart host, while stood up perpendicular to the host bracket, to rotate until the shooting center point of one of the smart host's shooting modules is closest to the touched area;

第二控制单元403,用于控制上述拍摄中心点离上述触及区域最近的智能主机的任一拍摄模组对该触及区域进行拍摄,获得拍摄图像;The second control unit 403 is configured to control any shooting module of the intelligent host whose shooting center point is closest to the touched area to photograph the touched area to obtain a photographed image;

内容识别单元404,用于对该拍摄图像进行内容识别。The content recognition unit 404 is configured to perform content recognition on the captured image.

可见，实施图4所描述的可穿戴设备，能够使原本处于拍摄视角边缘的用户触及区域尽量靠近拍摄中心点，减少所拍取的拍摄图像产生的畸变，进而在内容识别单元404对该拍摄图像进行内容识别时，减少因畸变带来的不利影响，从而能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that implementing the wearable device described in FIG. 4 can bring a touched area originally at the edge of the shooting angle of view as close to the shooting center point as possible, reducing distortion in the captured image; this in turn reduces the adverse effect of distortion when the content recognition unit 404 performs content recognition on the captured image, thereby improving the accuracy of content recognition with the wearable device.

请一并参阅图5,图5是本申请实施例公开的另一种可穿戴设备的结构示意图。其中,图5所示的可穿戴设备是由图4所示的可穿戴设备进行优化得到的。与图4所示的可穿戴设备相比较,图5所示的可穿戴设备的智能主机还包括:Please also refer to FIG. 5 , which is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. The wearable device shown in FIG. 5 is obtained by optimizing the wearable device shown in FIG. 4 . Compared with the wearable device shown in Figure 4, the smart host of the wearable device shown in Figure 5 further includes:

第一输出单元405,用于在上述内容识别单元404对上述拍摄图像进行内容识别之后,输出该拍摄图像的识别内容至可穿戴设备的智能主机的屏幕上显示;The first output unit 405 is configured to output the identification content of the captured image to display on the screen of the smart host of the wearable device after the content recognition unit 404 performs content recognition on the captured image;

第三控制单元406,用于控制上述智能主机旋转至上述屏幕朝向该可穿戴设备的佩戴者的人脸,以方便该佩戴者查看上述拍摄图像的识别内容。The third control unit 406 is configured to control the above-mentioned intelligent host to rotate so that the above-mentioned screen faces the face of the wearer of the wearable device, so as to facilitate the wearer to view the identification content of the above-mentioned captured image.

第一获取单元407，用于在上述第三控制单元406控制上述智能主机旋转至上述屏幕朝向该可穿戴设备的佩戴者的人脸，以方便该佩戴者查看上述拍摄图像的识别内容之后，获取该佩戴者发出的操作指令；The first obtaining unit 407 is configured to, after the third control unit 406 controls the smart host to rotate until the screen faces the face of the wearer of the wearable device so that the wearer can conveniently view the recognition content of the captured image, obtain the operation instruction issued by the wearer;

第一提取单元408,用于当上述第一获取单元407获取的该佩戴者发出的操作指令表示对上述识别内容进行点读时,提取该识别内容中的文本内容;The first extracting unit 408 is configured to extract the text content in the identification content when the operation instruction issued by the wearer obtained by the first obtaining unit 407 indicates that the identification content is to be read;

第二获取单元409,用于获取该佩戴者从该文本内容中选择的点读内容;The second obtaining unit 409 is configured to obtain the click-through content selected by the wearer from the text content;

播报单元410,用于播报该点读内容。The broadcasting unit 410 is used for broadcasting the point-to-point content.

作为一种可选的实施方式，在播报单元410播报上述点读内容之后，若该点读内容为英文单词，该可穿戴设备还可以：联网搜索该英文单词的相关信息，其中，该相关信息可以包括该英文单词的读音、翻译、例句；在此基础上，第一输出单元405输出上述相关信息至该可穿戴设备的智能主机的屏幕上，以浮窗的形式显示在该英文单词附近；播报单元410依次播报该相关信息；进一步的，可穿戴设备还可以检测其佩戴者对该英文单词的跟读，并判断跟读是否正确，通过第一输出单元405或播报单元410输出相应的评分。通过实施上述方法，能够在利用可穿戴设备实现点读功能的基础上，帮助该可穿戴设备的佩戴者便捷地查阅和翻译英文学习资料，同时对不熟识的英文单词进行学习，提升了可穿戴设备的实用性。As an optional implementation, after the broadcasting unit 410 broadcasts the read-aloud content, if that content is an English word, the wearable device may further: search the Internet for related information on the word, where the related information may include the word's pronunciation, translation, and example sentences; on this basis, the first output unit 405 outputs the related information to the screen of the wearable device's smart host, displaying it near the English word in the form of a floating window, and the broadcasting unit 410 broadcasts the related information in turn. Further, the wearable device may also detect the wearer's follow-up reading of the English word, judge whether the follow-up reading is correct, and output a corresponding score through the first output unit 405 or the broadcasting unit 410. Implementing the above method, on the basis of realizing the read-aloud function with the wearable device, helps the wearer conveniently look up and translate English learning materials while learning unfamiliar English words, improving the practicality of the wearable device.

可见,实施图5所描述的可穿戴设备,能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that implementing the wearable device described in FIG. 5 can improve the accuracy of content recognition using the wearable device.

此外，实施图5所描述的可穿戴设备，能够利用可穿戴设备实现点读功能，从而拓展了可穿戴设备的使用场景，提升了其实用性。In addition, implementing the wearable device described in FIG. 5 enables the wearable device to realize the read-aloud function, thereby expanding the usage scenarios of the wearable device and improving its practicability.

请一并参阅图6,图6是本申请实施例公开的又一种可穿戴设备的结构示意图。其中,图6所示的可穿戴设备是由图5所示的可穿戴设备进行优化得到的。与图5所示的可穿戴设备相比较,图6所示的可穿戴设备的智能主机还包括:Please refer to FIG. 6 together. FIG. 6 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. The wearable device shown in FIG. 6 is obtained by optimizing the wearable device shown in FIG. 5 . Compared with the wearable device shown in Figure 5, the smart host of the wearable device shown in Figure 6 further includes:

第二提取单元411,用于当上述第一获取单元407获取的该佩戴者发出的操作指令表示对上述识别内容进行搜题时,提取该识别内容中的图文内容;The second extraction unit 411 is configured to extract the graphic and text content in the identification content when the operation instruction issued by the wearer obtained by the first obtaining unit 407 indicates that the identification content is to be searched;

语义识别单元412,用于在上述第二提取单元411提取该识别内容中的图文内容之后,识别该图文内容的语义,根据该语义判断该图文内容是否包含完整题目信息;The semantic recognition unit 412 is used for identifying the semantics of the graphic content after the above-mentioned second extraction unit 411 extracts the graphic content in the identified content, and judges whether the graphic content contains complete title information according to the semantics;

调整单元413,用于当上述语义识别单元412判断出该图文内容不包含完整题目信息时,调整可穿戴设备的智能主机的任一拍摄模组的拍摄视角至包含该完整题目信息;The adjustment unit 413 is used to adjust the shooting angle of any shooting module of the smart host of the wearable device to include the complete title information when the above-mentioned semantic recognition unit 412 judges that the graphic content does not contain the complete title information;

联网搜索单元414，用于当上述语义识别单元412判断出该图文内容包含完整题目信息，以及当上述语义识别单元412判断出该图文内容不包含完整题目信息，上述调整单元413调整可穿戴设备的智能主机的任一拍摄模组的拍摄视角至包含该完整题目信息时，从联网题库中搜索与该图文内容相匹配的题目问答信息；The online search unit 414 is configured to search the online question bank for question-and-answer information matching the graphic-text content when the semantic recognition unit 412 judges that the graphic-text content contains complete question information, or when the semantic recognition unit 412 judges that it does not and the adjusting unit 413 has adjusted the shooting angle of view of one of the shooting modules of the wearable device's smart host so that it contains the complete question information;

第二输出单元415,用于将该题目问答信息输出至与该可穿戴设备连接的学习设备上显示。The second output unit 415 is configured to output the question and answer information to a learning device connected to the wearable device for display.

本申请实施例中,通过实施上述可穿戴设备,能够实现拍照搜题的功能,从而帮助该可穿戴设备的佩戴者进行自主学习,提升学习积极性。In the embodiment of the present application, by implementing the above-mentioned wearable device, the function of taking pictures and searching for questions can be realized, thereby helping the wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.

作为一种可选的实施方式，若上述第一获取单元407所获取该可穿戴设备的佩戴者发出的操作指令表示对上述识别内容进行搜题并批改，则第二提取单元411还可以判别并分别提取上述识别内容中的题目信息和作答信息；接下来，联网搜索单元414从联网题库中搜索与上述题目信息相匹配的题目问答信息；根据该题目问答信息，可穿戴设备可以对上述作答信息进行批改，获得批改结果；在此基础上，第二输出单元415将该批改结果输出至与该可穿戴设备连接的学习设备上显示。As an optional implementation, if the operation instruction issued by the wearer of the wearable device obtained by the first obtaining unit 407 indicates that the recognition content is to be searched as a question and corrected, the second extraction unit 411 may further distinguish and separately extract the question information and the answer information from the recognition content; next, the online search unit 414 searches the online question bank for question-and-answer information matching the question information; according to that question-and-answer information, the wearable device can correct the answer information and obtain a correction result; on this basis, the second output unit 415 outputs the correction result to a learning device connected to the wearable device for display.

进一步的,若上述批改结果表示上述作答信息存在错误,第二输出单元415还可以在上述学习设备上高亮显示存在错误的作答信息,以方便用户查看。Further, if the correction result indicates that there is an error in the answering information, the second output unit 415 may also highlight the answering information with an error on the learning device, so as to facilitate the user to check.

通过实施可穿戴设备,能够进一步帮助可穿戴设备的佩戴者进行自主学习,培养该佩戴者独立解决问题的能力和积极性。By implementing the wearable device, it can further help the wearer of the wearable device to conduct autonomous learning, and cultivate the wearer's ability and enthusiasm to solve problems independently.

可见,实施图6所描述的可穿戴设备,能够提升利用可穿戴设备进行内容识别的准确率。It can be seen that implementing the wearable device described in FIG. 6 can improve the accuracy of content recognition using the wearable device.

此外,实施图6所描述的可穿戴设备,能够利用可穿戴设备实现搜题功能,从而帮助该可穿戴设备的佩戴者进行自主学习,提升学习积极性。In addition, by implementing the wearable device described in FIG. 6 , the wearable device can be used to realize the function of searching questions, thereby helping the wearer of the wearable device to perform autonomous learning and improving learning enthusiasm.

请参阅图7,图7是本申请实施例公开的又一种可穿戴设备的结构示意图。其中,该可穿戴设备包括智能主机和主机支架,该智能主机在立起来与该主机支架垂直时能够发生360°范围内的任意角度的旋转。如图7所示,该智能主机可以包括:Please refer to FIG. 7 , which is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application. Wherein, the wearable device includes a smart host and a host bracket, and the smart host can rotate at any angle within a range of 360° when the smart host is vertical to the host bracket. As shown in Figure 7, the smart host may include:

存储有可执行程序代码的存储器701;a memory 701 storing executable program code;

与存储器701耦合的处理器702;a processor 702 coupled to the memory 701;

其中,处理器702调用存储器701中存储的可执行程序代码,执行图1~图3任意一种基于可穿戴设备的内容识别方法中的全部或部分步骤。The processor 702 invokes the executable program code stored in the memory 701 to execute all or part of the steps in any of the wearable device-based content identification methods shown in FIGS. 1 to 3 .

此外，本申请实施例进一步公开了一种计算机可读存储介质，其存储用于电子数据交换的计算机程序，其中，该计算机程序使得计算机执行图1~图3任意一种基于可穿戴设备的内容识别方法中的全部或部分步骤。In addition, an embodiment of the present application further discloses a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform all or part of the steps in any of the wearable-device-based content recognition methods of FIGS. 1 to 3.

此外，本申请实施例进一步公开一种计算机程序产品，当该计算机程序产品在计算机上运行时，使得计算机执行图1~图3任意一种基于可穿戴设备的内容识别方法中的全部或部分步骤。In addition, an embodiment of the present application further discloses a computer program product which, when run on a computer, causes the computer to perform all or part of the steps in any of the wearable-device-based content recognition methods of FIGS. 1 to 3.

本领域普通技术人员可以理解上述实施例的各种方法中的全部或部分步骤是可以通过程序来指令相关的硬件来完成，该程序可以存储于一计算机可读存储介质中，存储介质包括只读存储器(Read-Only Memory,ROM)、随机存储器(Random Access Memory,RAM)、可编程只读存储器(Programmable Read-only Memory,PROM)、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、一次可编程只读存储器(One-time Programmable Read-Only Memory,OTPROM)、电子抹除式可复写只读存储器(Electrically-Erasable Programmable Read-Only Memory,EEPROM)、只读光盘(Compact Disc Read-Only Memory,CD-ROM)或其他光盘存储器、磁盘存储器、磁带存储器、或者能够用于携带或存储数据的计算机可读的任何其他介质。Those of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically-Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.

以上对本申请实施例公开的一种基于可穿戴设备的内容识别方法及可穿戴设备进行了详细介绍，本文中应用了具体个例对本发明的原理及实施方式进行了阐述，以上实施例的说明只是用于帮助理解本发明的方法及其核心思想；同时，对于本领域的一般技术人员，依据本发明的思想，在具体实施方式及应用范围上均会有改变之处，综上所述，本说明书内容不应理解为对本发明的限制。The wearable-device-based content recognition method and wearable device disclosed in the embodiments of the present application have been described in detail above; specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (12)

1. A content recognition method based on a wearable device, wherein the wearable device comprises a smart host and a host bracket, the smart host being rotatable to any angle within a 360° range when raised perpendicular to the host bracket, and the method comprises the following steps:
detecting a touch operation performed by a user on a screen of the smart host when the smart host is raised perpendicular to the host bracket, and, when the touch operation is detected, acquiring a touch area of the touch operation;
when the distance between the touch area and the shooting center point of any shooting module of the smart host is greater than a preset threshold, determining that the touch area is located at the edge of the shooting angle of view of that shooting module;
controlling the smart host, while raised perpendicular to the host bracket, to rotate until the shooting center point of one shooting module of the smart host is closest to the touch area;
controlling the shooting module of the smart host whose shooting center point is closest to the touch area to photograph the touch area, obtaining a captured image; and
passing the captured image sequentially through a preprocessing model and a content recognition model to obtain recognized content of the captured image.
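The geometric test and rotation target recited in claim 1 can be sketched in a few lines of Python. This is a hedged illustration only: the 2-D coordinate model, the threshold value, and all function names below are assumptions for clarity, not taken from the patent.

```python
import math

# Minimal sketch of the flow recited in claim 1. The 2-D coordinate model,
# the threshold value, and all names here are illustrative assumptions.

EDGE_THRESHOLD = 50.0  # the "preset threshold", assumed to be in pixels

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def touch_is_at_edge(touch, centers, threshold=EDGE_THRESHOLD):
    """The touch area counts as lying at the edge of the angle of view when
    its distance to every module's shooting center exceeds the threshold."""
    return all(distance(touch, c) > threshold for c in centers)

def nearest_module(touch, centers):
    """Index of the shooting module whose center point is closest to the
    touch area, i.e. the module the host rotates toward before capturing."""
    return min(range(len(centers)), key=lambda i: distance(touch, centers[i]))

def recognize_touch_area(touch, centers, capture, preprocess, recognize):
    """End-to-end sketch: pick the nearest module, capture the touch area,
    then pass the image through the two models in sequence."""
    module = nearest_module(touch, centers)
    image = capture(module, touch)
    return recognize(preprocess(image))
```

The `capture`, `preprocess`, and `recognize` callables stand in for the camera driver and the two models, which the claim treats as black boxes.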
2. The content recognition method according to claim 1, wherein after obtaining the recognized content of the captured image, the method further comprises:
outputting the recognized content of the captured image to the screen of the smart host for display; and
controlling the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.
3. The content recognition method according to claim 2, wherein after controlling the smart host to rotate until the screen faces the face of the wearer of the wearable device so that the wearer can conveniently view the recognized content of the captured image, the method further comprises:
acquiring an operation instruction issued by the wearer;
if the operation instruction indicates click-to-read of the recognized content, extracting text content from the recognized content;
acquiring click-to-read content selected by the wearer from the text content; and
broadcasting the click-to-read content.
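The click-to-read steps of claim 3 (extract text, take the wearer's selection, broadcast it) can be illustrated with a toy OCR layout. The `(text, bounding-box)` data shape and the print-based TTS stand-in are assumptions, not the patent's implementation.

```python
# Toy sketch of the click-to-read flow in claim 3. OCR output is assumed to
# be (text, bounding-box) pairs and the wearer's selection a tap point; the
# data layout and the print-based TTS backend are illustrative assumptions.

def point_in_box(point, box):
    """box = (x_min, y_min, x_max, y_max); inclusive containment test."""
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def select_click_to_read(words, tap):
    """Return the text of the first OCR word whose box contains the tap,
    or None when the tap falls outside every word."""
    for text, box in words:
        if point_in_box(tap, box):
            return text
    return None

def broadcast(text, tts=print):
    """Broadcasting sketched as a call into a TTS backend (print here)."""
    if text is not None:
        tts(text)
```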
4. The content recognition method according to claim 3, further comprising:
if the operation instruction indicates a question search on the recognized content, extracting image-text content from the recognized content;
searching a networked question bank for question-and-answer information matching the image-text content; and
outputting the question-and-answer information to a learning device connected to the wearable device for display.
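One plausible reading of the matching step in claim 4 is fuzzy text matching against a stored bank. The `difflib` routine and the cut-off value below are stand-ins for whatever the networked question bank actually uses; they are assumptions, not the claimed mechanism.

```python
import difflib

# Fuzzy-matching sketch of the question-bank search in claim 4. difflib and
# the cut-off value are assumptions standing in for the real networked bank.

def normalize(text):
    """Collapse whitespace and case so small OCR differences matter less."""
    return " ".join(text.lower().split())

def search_question_bank(ocr_text, bank, cutoff=0.6):
    """bank maps question text to answer info; return the answer of the
    closest stored question above the similarity cut-off, or None."""
    questions = {normalize(q): a for q, a in bank.items()}
    match = difflib.get_close_matches(normalize(ocr_text),
                                      list(questions), n=1, cutoff=cutoff)
    return questions[match[0]] if match else None
```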
5. The method according to claim 4, wherein after extracting the image-text content from the recognized content and before searching the networked question bank for question-and-answer information matching the image-text content, the method further comprises:
identifying the semantics of the image-text content, and judging from the semantics whether the image-text content contains complete question information; and, if not, adjusting the shooting angle of view of the shooting module of the smart host so as to contain the complete question information, and then performing the step of searching the networked question bank for question-and-answer information matching the image-text content.
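Claim 5 describes the completeness judgment as semantic; a crude textual heuristic conveys the idea. The sentence-terminator and A-D option rules below are assumptions for illustration, not the patent's method.

```python
import re

# Heuristic stand-in for the semantic completeness check in claim 5: a
# question is treated as complete when its stem ends in a terminator and,
# for multiple-choice items, the option list runs all the way from A to D.

OPTION_RE = re.compile(r"(?m)^\s*([A-D])[.)]")

def looks_complete(question_text):
    """True when the text plausibly contains complete question information."""
    text = question_text.strip()
    if not text:
        return False
    options = OPTION_RE.findall(text)
    if options:  # multiple-choice: require all four options A-D
        return set("ABCD") <= set(options)
    return text[-1] in "?？。.!"  # plain question: require a terminator
```

When this check fails, the claimed method widens the shooting angle of view and re-captures before querying the question bank.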
6. A wearable device, comprising a smart host and a host bracket, wherein the smart host is rotatable to any angle within a 360° range when raised perpendicular to the host bracket, the smart host comprising:
a touch detection unit, configured to detect a touch operation performed by a user on a screen of the smart host when the smart host is raised perpendicular to the host bracket; to acquire, when the touch operation is detected, a touch area of the touch operation; and, when the distance between the touch area and the shooting center point of any shooting module of the smart host is greater than a preset threshold, to determine that the user's touch area is located at the edge of the shooting angle of view of that shooting module;
a first control unit, configured to control the smart host, while raised perpendicular to the host bracket, to rotate until the shooting center point of one shooting module of the smart host is closest to the touch area, when the touch detection unit determines that the user's touch area is located at the edge of the shooting angle of view of any shooting module of the smart host;
a second control unit, configured to control the shooting module of the smart host whose shooting center point is closest to the touch area to photograph the touch area, obtaining a captured image; and
a content recognition unit, configured to pass the captured image sequentially through a preprocessing model and a content recognition model to obtain recognized content of the captured image.
7. The wearable device according to claim 6, wherein the smart host further comprises:
a first output unit, configured to output the recognized content of the captured image to the screen of the smart host for display after the content recognition unit obtains the recognized content of the captured image; and
a third control unit, configured to control the smart host to rotate until the screen faces the face of the wearer of the wearable device, so that the wearer can conveniently view the recognized content of the captured image.
8. The wearable device according to claim 7, wherein the smart host further comprises:
a first acquiring unit, configured to acquire an operation instruction issued by the wearer after the third control unit controls the smart host to rotate until the screen faces the face of the wearer of the wearable device so that the wearer can conveniently view the recognized content of the captured image;
a first extracting unit, configured to extract text content from the recognized content when the operation instruction acquired by the first acquiring unit indicates click-to-read of the recognized content;
a second acquiring unit, configured to acquire click-to-read content selected by the wearer from the text content; and
a broadcasting unit, configured to broadcast the click-to-read content.
9. The wearable device according to claim 8, wherein the smart host further comprises:
a second extracting unit, configured to extract image-text content from the recognized content when the operation instruction acquired by the first acquiring unit indicates a question search on the recognized content;
a networked search unit, configured to search a networked question bank for question-and-answer information matching the image-text content; and
a second output unit, configured to output the question-and-answer information to a learning device connected to the wearable device for display.
10. The wearable device according to claim 9, wherein the smart host further comprises:
a semantic recognition unit, configured to identify the semantics of the image-text content and judge from the semantics whether the image-text content contains complete question information, after the second extracting unit extracts the image-text content from the recognized content and before the networked search unit searches the networked question bank for question-and-answer information matching the image-text content; and
an adjusting unit, configured to adjust the shooting angle of view of the shooting module of the smart host so as to contain the complete question information when the semantic recognition unit judges that the image-text content does not contain the complete question information, and to trigger the networked search unit to search the networked question bank for question-and-answer information matching the image-text content containing the complete question information.
11. A wearable device, comprising a smart host and a host bracket, wherein the smart host is rotatable to any angle within a 360° range when raised perpendicular to the host bracket, the smart host comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to perform the wearable-device-based content recognition method according to any one of claims 1 to 5.
12. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to perform the wearable-device-based content recognition method according to any one of claims 1 to 5.
CN201911088886.9A 2019-11-08 2019-11-08 A wearable device-based content recognition method and wearable device Active CN111182202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911088886.9A CN111182202B (en) 2019-11-08 2019-11-08 A wearable device-based content recognition method and wearable device


Publications (2)

Publication Number Publication Date
CN111182202A CN111182202A (en) 2020-05-19
CN111182202B (en) 2022-05-27

Family

ID=70651882

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911088886.9A Active CN111182202B (en) 2019-11-08 2019-11-08 A wearable device-based content recognition method and wearable device

Country Status (1)

Country Link
CN (1) CN111182202B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101783883A (en) * 2009-12-26 2010-07-21 华为终端有限公司 Adjusting method in co-optical-center videography and co-optical-center camera system
CN104217197A (en) * 2014-08-27 2014-12-17 华南理工大学 Touch reading method and device based on visual gestures
CN105611161A (en) * 2015-12-24 2016-05-25 广东欧珀移动通信有限公司 Photographing control method, photographing control device and photographing system
EP3103385A1 (en) * 2015-06-12 2016-12-14 Hill-Rom Services, Inc. Image transmission or recording triggered by bed event
CN110177242A (en) * 2019-04-08 2019-08-27 广东小天才科技有限公司 Video call method based on wearable device and wearable device
CN110174924A (en) * 2018-09-30 2019-08-27 广东小天才科技有限公司 Friend making method based on wearable device and wearable device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581532B (en) * 2012-07-24 2018-06-05 合硕科技股份有限公司 Method and device for controlling lens signal shooting by using handheld device
CN104793887B (en) * 2015-04-29 2017-08-18 广东欧珀移动通信有限公司 Dual camera control method and device for mobile terminal
CN104822029B (en) * 2015-05-22 2018-11-27 广东欧珀移动通信有限公司 A kind of method, apparatus that control rotating camera rotates and a kind of mobile terminal
CN104954672B (en) * 2015-06-10 2020-06-02 惠州Tcl移动通信有限公司 Manual focusing method of mobile terminal and mobile terminal
CN105120162B (en) * 2015-08-27 2019-04-16 Oppo广东移动通信有限公司 A kind of camera method of controlling rotation and terminal
CN105791675A (en) * 2016-02-26 2016-07-20 广东欧珀移动通信有限公司 Terminal, imaging and interactive control method and device, terminal and system thereof
CN106485758B (en) * 2016-10-31 2023-08-22 成都通甲优博科技有限责任公司 Unmanned aerial vehicle camera calibration device, calibration method and assembly line calibration implementation method


Similar Documents

Publication Publication Date Title
CN108665742B (en) Method and device for reading through reading device
CN108021320B (en) Electronic equipment and item searching method thereof
CN109597943B (en) Learning content recommendation method based on scene and learning equipment
CN107451127B (en) Word translation method and system based on image and mobile device
CN111156441B (en) Desk lamp, system and method for assisting learning
CN108665764B (en) Method and device for reading through reading device
CN113052169A (en) Video subtitle recognition method, device, medium, and electronic device
CN105912717A (en) Image-based information searching method and device
US12041313B2 (en) Data processing method and apparatus, device, and medium
CN107330040B (en) Learning question searching method and system
CN107977146B (en) A mask-based topic search method and electronic device
CN111026949A (en) A method and system for searching questions based on electronic equipment
CN112163513A (en) Information selection method, system, device, electronic device and storage medium
CN106202360B (en) Test question searching method and device
CN113272873A (en) Method and apparatus for augmented reality
CN111079737B (en) Text tilt correction method and electronic device
CN111739534A (en) A processing method, device, electronic device and storage medium for assisting speech recognition
CN111079726B (en) Image processing method and electronic equipment
CN111182202B (en) A wearable device-based content recognition method and wearable device
CN105760367B (en) Real-time word translation method and device
CN110795918A (en) Method, device and equipment for determining reading position
CN112837398A (en) Method, device, electronic device and storage medium for text annotation
CN111753604A (en) A point reading method based on learning equipment and learning equipment
CN106294659B (en) Method and device for searching questions based on intelligent terminal
CN111027353A (en) Search content extraction method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant