
CN110084979A - Man-machine interaction method, device and controller and interactive device - Google Patents

Man-machine interaction method, device and controller and interactive device

Info

Publication number
CN110084979A
Authority
CN
China
Prior art keywords
virtual target
target area
area
target item
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910329020.6A
Other languages
Chinese (zh)
Other versions
CN110084979B (en)
Inventor
唐承佩
陈崇雨
黄寒露
陈添水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DMAI Guangzhou Co Ltd
Original Assignee
DMAI Guangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DMAI Guangzhou Co Ltd filed Critical DMAI Guangzhou Co Ltd
Priority to CN201910329020.6A priority Critical patent/CN110084979B/en
Publication of CN110084979A publication Critical patent/CN110084979A/en
Application granted granted Critical
Publication of CN110084979B publication Critical patent/CN110084979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07F COIN-FREED OR LIKE APPARATUS
    • G07F17/00 Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/32 Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F17/326 Game play aspects of gaming systems
    • G07F17/3262 Player actions which determine the course of the game, e.g. selecting a prize to be won, outcome to be achieved, game to be played

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention provides a human-computer interaction method, device, controller, and interactive device for controlling the movement of a target item to a target area. The interactive device includes a display and a delivery component that can store the target item; the target item and the target area are respectively mapped on the display as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. The method includes: obtaining, respectively, the preset expression within a region to be eliminated and the user's facial expression; judging whether the degree to which the user's facial expression matches the preset expression in the region to be eliminated is greater than a preset threshold; and, when the matching degree is greater than the preset threshold, eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area. Because the target item that the delivery component can dispense corresponds to the virtual target item on the display, the user interacts more strongly with the interactive device, and the target item and the virtual target item move in linkage.

Description

Human-computer interaction method, device, controller, and interactive device

Technical Field

The present invention relates to the technical field of motion-sensing games, and in particular to a human-computer interaction method and device, as well as a controller and an interactive device.

Background Art

Human-computer interaction technology (Human-Computer Interaction Techniques) refers to technology that enables effective dialogue between humans and computers through computer input and output devices. At present, many interactive devices exist for learning and entertainment. For example, various large entertainment machines are found in arcades and major shopping districts; human-computer interaction is usually realized through dedicated input devices such as joysticks, buttons, or touch screens, and physical rewards are set to boost user engagement. Representative entertainment machines include claw machines and lipstick machines. In a claw machine, the user controls a mechanical grabbing arm inside the cabinet with a joystick and buttons in order to grab a physical prize, which makes the interaction cumbersome; in a lipstick machine, the user selects the desired prize and completes the corresponding operation by tapping a touch screen, an operation that is only weakly related to the reward item itself, so the human-computer interaction effect is poor.

Summary of the Invention

The object of the present invention is to provide a human-computer interaction method and device, as well as a controller and an interactive device, so as to solve at least one of the above technical problems in the prior art.

To achieve the above object, the present invention adopts the following technical solution: a human-computer interaction method is provided, applicable to an interactive device and used to control the movement of a target item to a target area. The interactive device includes a display and a delivery component that can store the target item; the target item and the target area are respectively mapped on the display as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. The method includes:

obtaining, respectively, the preset expression within a region to be eliminated and the user's facial expression;

judging whether the degree to which the user's facial expression matches the preset expression in the region to be eliminated is greater than a preset threshold; and

when the matching degree is greater than the preset threshold, eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area.

Further, before obtaining the preset expression within the region to be eliminated, the method includes:

obtaining a start instruction; and

setting a timing clock according to the start instruction.

Further, when the matching degree is greater than the preset threshold, the region to be eliminated is eliminated, the virtual target item is controlled to move toward the virtual target area, and the steps of respectively obtaining the preset expression in the region to be eliminated and the user's facial expression and judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold are repeated until the timing ends or the virtual target item moves to the virtual target area.

Further, when the matching degree is less than the preset threshold, a prompt signal indicating an unsuccessful match is output, and the steps of respectively obtaining the preset expression in the region to be eliminated and the user's facial expression and judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold are repeated until the matching degree is greater than the preset threshold or the timing ends.

Further, after eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area, the method further includes:

judging whether the virtual target item reaches the virtual target area before the timing clock ends; and

when the virtual target item reaches the virtual target area, controlling the target item to move to the target area.

Further, when the virtual target item does not reach the virtual target area, the target item is controlled to return to its initial position.

Further, judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold includes:

obtaining a feature point set of the preset expression;

extracting a facial feature point set from the user's facial expression; and

calculating the matching degree between the facial feature point set and the feature point set of the preset expression.

An embodiment of the present invention provides a human-computer interaction device, applicable to an interactive device and used to control the movement of a target item to a target area. The interactive device includes a display module and a delivery module that can store the target item; the target item and the target area are respectively mapped on the display module as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. The device includes:

an acquisition module, configured to obtain, respectively, the preset expression in a region to be eliminated and the user's facial expression;

a judgment module, configured to judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold; and

a control module, configured to eliminate the region to be eliminated and control the virtual target item to move toward the virtual target area when the matching degree is greater than the preset threshold.

An embodiment of the present invention provides a controller, which includes:

at least one processor; and

a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the human-computer interaction method described in any one of claims 1-7.

An embodiment of the present invention provides an interactive device, which includes:

the controller of any of the above embodiments;

a display connected to the controller, one side of the display being provided with an image acquisition device; and

a delivery component connected to the controller, configured to store the target item and, under the control of the controller, deliver the target item to the target area.

Further, the interactive device also includes an instruction input device and an instruction output device, both connected to the controller; the instruction input device includes any one or more of a microphone, a joystick, a touch screen, or a keyboard, and the instruction output device includes any one or more of an illuminating lamp or a loudspeaker.

In the human-computer interaction method, device, controller, and interactive device provided by the embodiments of the present invention, the target item that the delivery component can dispense corresponds to the virtual target item on the display. Eliminating a region to be eliminated shortens the distance between the virtual target item and the virtual target area, so the virtual target item can move toward the virtual target area, which in turn drives the delivery component to move the target item to the target area. When a region is eliminated, the user's facial expression is compared with the preset expression and the region is eliminated according to the matching degree. This differs from traditional elimination mechanics, gives the user stronger interaction with the device, and links the target item with the virtual target item, thereby enhancing the interactivity between the target item and the user, making the device more playable and the human-computer interaction closer.

Brief Description of the Drawings

In order to more clearly illustrate the specific embodiments of the present invention or the technical solutions in the prior art, the drawings required for describing the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic diagram of a human-computer interaction method provided by an embodiment of the present invention;

Fig. 2 is a schematic diagram of a human-computer interaction method according to another embodiment of the present invention;

Fig. 3 is a schematic diagram of a human-computer interaction device provided by an embodiment of the present invention;

Fig. 4 is a schematic diagram of an interactive device provided by an embodiment of the present invention.

Description of reference numerals:

1. display; 2. delivery component; 3. image acquisition device; 4. controller; 5. instruction input device; 6. instruction output device; 41. processor; 42. memory; 10. acquisition module; 20. judgment module; 30. control module; 40. display module; 50. delivery module.

Detailed Description of the Embodiments

The technical solutions of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are some, but not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

In the description of the present invention, it should be noted that orientation or positional terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings; they are intended only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, so they should not be construed as limiting the present invention. In addition, the terms "first", "second", and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance.

In the description of the present invention, it should also be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" should be understood in a broad sense: a connection may, for example, be fixed, detachable, or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of these terms in the present invention according to the specific circumstances.

In addition, the technical features involved in the different embodiments of the present invention described below can be combined with each other as long as they do not conflict.

Embodiment 1

Referring to Fig. 1 and Fig. 2 together, the human-computer interaction method provided by the present invention is now described. The method is applicable to an interactive device and is used to control the movement of a target item to a target area. The interactive device includes a display 1 and a delivery component 2 that can store the target item; the target item and the target area are respectively mapped on the display 1 as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. Specifically, the virtual target item and the virtual target area are both shown directly on the display 1, while the target item is stored inside the delivery component 2, and the delivery component 2 can move the target item to the target area. In this embodiment, the relationship between the target item and the target area is likewise mapped on the display 1 as the relationship between the virtual target item and the virtual target area: as the virtual target item moves toward the virtual target area, the corresponding target item moves toward the target area in linkage with it, or the corresponding target item moves to the target area once the virtual target item reaches the virtual target area. The target item is generally an incentive prize, and moving it to the target area means dispensing it outside the interactive device where the user can pick it up. The delivery component 2 generally includes hardware such as a servo, a reducer, and a control circuit, and can control the movement, recovery, and dispensing of the target item.
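
Below is a minimal sketch, purely illustrative and not disclosed in the patent, of how the linkage between the on-screen state and the physical delivery component described above might be modeled; all names and default values are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionState:
    """Hypothetical state linking the physical prize to its virtual twin on the display."""
    virtual_item_pos: int = 0                     # current step of the virtual target item
    virtual_target_pos: int = 5                   # step index of the virtual target area
    pending_regions: List[int] = field(default_factory=lambda: [1, 2, 3, 4, 5])  # regions to be eliminated
    prize_dispensed: bool = False                 # whether the delivery component released the prize

    def progress(self) -> float:
        """Fraction of the path covered, usable to drive the delivery component in linkage mode."""
        return min(self.virtual_item_pos / self.virtual_target_pos, 1.0)
```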

The method includes the following steps:

S101. Obtain, respectively, the preset expression in a region to be eliminated and the user's facial expression. A preset expression is set in the region to be eliminated; the user needs to imitate the preset expression, and the imitated facial expression is compared with the preset expression until it matches. The preset expression may be generated randomly after the region to be eliminated is selected, or it may be the expression already shown in the region to be eliminated, in which case the user can choose a region to be eliminated according to their needs and their own situation.

S102. Judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold. The preset threshold is generally used to assess how well the user's facial expression matches the preset expression, and it can be adjusted as needed. When the matching degree is greater than the preset threshold, proceed to step S103; when the matching degree is less than the preset threshold, return to step S101.

S103. Eliminate the region to be eliminated, and control the virtual target item to move toward the virtual target area. After the region to be eliminated is removed, it is determined whether the virtual target item can move toward the virtual target area: if it can, the virtual target item moves toward the virtual target area; if it cannot, the virtual target item stays in place.
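
A hedged Python sketch of the S101-S103 loop described above; `capture_user_expression`, `preset_expression_for`, and `match_score` are assumed helper callables, and the threshold and time limit are placeholder values rather than figures taken from the patent.

```python
import time

MATCH_THRESHOLD = 0.8   # assumed preset threshold; adjustable as the description notes
TIME_LIMIT_S = 60.0     # assumed duration of the timing clock set by the start instruction

def run_round(pending_regions, steps_to_target,
              capture_user_expression, preset_expression_for, match_score):
    """Repeat S101-S103 until the timer ends or the virtual item reaches the target area."""
    deadline = time.monotonic() + TIME_LIMIT_S
    progress = 0
    while time.monotonic() < deadline and pending_regions and progress < steps_to_target:
        region = pending_regions[0]                        # next region on a preset path
        preset = preset_expression_for(region)             # S101: preset expression in the region
        user = capture_user_expression()                   # S101: user's facial expression
        if match_score(user, preset) > MATCH_THRESHOLD:    # S102: compare with the threshold
            pending_regions.pop(0)                         # S103: eliminate the region...
            progress += 1                                  # ...and advance the virtual item
        # otherwise a "match unsuccessful" prompt would be issued and the loop repeats
    return progress >= steps_to_target                     # True -> dispense the physical prize
```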

Compared with the prior art, in the human-computer interaction method provided by the present invention, the target item that the delivery component 2 can dispense corresponds to the virtual target item on the display 1. Eliminating a region to be eliminated shortens the distance between the virtual target item and the virtual target area, so the virtual target item can move toward the virtual target area, which in turn drives the delivery component 2 to move the target item to the target area. When a region is eliminated, the user's facial expression is compared with the preset expression and the region is eliminated according to the matching degree. This differs from traditional elimination mechanics, gives the user stronger interaction with the device, and links the target item with the virtual target item, thereby enhancing the interactivity between the target item and the user, making the device more playable and the human-computer interaction closer.

As an optional embodiment, step S103 may include: after the region to be eliminated is removed, the virtual target item moves into that region, a new region to be eliminated is selected, and steps S101 and S102 are repeated until the virtual target item has moved to the virtual target area. In this embodiment, the new region to be eliminated may be selected based on a preset path, that is, the regions the virtual target item must pass through on its way to the virtual target area are set in advance; once all the regions on the preset path have been eliminated, the virtual target item can move to the virtual target area. Alternatively, the next region to be eliminated may be determined from a selection instruction received from the user: for example, a touch event on the display 1 may be obtained, the touch event indicating the user's choice of the region to be eliminated, the next region is determined from the touch event, and steps S101-S102 are repeated until the virtual target item moves to the virtual target area.
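
As a sketch of the two selection modes just described (preset path versus user touch), assuming the touch event has already been resolved to a region identifier; this helper is illustrative, not part of the patent.

```python
def choose_next_region(preset_path, touched_region=None):
    """Return the user's touched region if it is a valid pending region,
    otherwise fall back to the next region along the preset path."""
    if touched_region is not None and touched_region in preset_path:
        return touched_region
    return preset_path[0] if preset_path else None
```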

As another optional implementation, referring to Fig. 1 and Fig. 2 together, before obtaining the preset expression in the region to be eliminated, the method may further include: obtaining a start instruction and setting a timing clock according to the start instruction.

When the matching degree is greater than the preset threshold, the region to be eliminated is eliminated, the virtual target item is controlled to move toward the virtual target area, and steps S101 to S103 are repeated until the timing ends or the virtual target item moves to the virtual target area.

When the matching degree is less than the preset threshold, a prompt signal indicating an unsuccessful match is output and steps S101 to S102 are repeated; step S103 is entered once the matching degree exceeds the preset threshold, or the human-computer interaction ends when the timing ends.

Specifically, the start instruction is issued automatically when the display 1 of the interactive device shows the virtual target item and the virtual target area, or it is issued by the user through the display 1; once the start instruction is issued, the timing clock starts. The timing may count up, i.e., from zero until a preset duration is reached, or count down, i.e., from a preset duration until it reaches zero, which allows the execution time of the whole game or program to be controlled. The timing may also count up from zero and stop when the virtual target item enters the virtual target area, in which case the time taken by the game, method, or program can be recorded.
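
A small sketch covering both timing modes mentioned above (count-up and countdown); the class and its interface are assumptions, not part of the patent.

```python
import time

class RoundTimer:
    """Counts up from zero when no limit is given, or down from a preset limit otherwise."""
    def __init__(self, limit_s=None):
        self.start = time.monotonic()
        self.limit_s = limit_s                      # None -> count-up mode

    def elapsed(self) -> float:
        return time.monotonic() - self.start

    def remaining(self) -> float:
        if self.limit_s is None:
            return float("inf")
        return max(self.limit_s - self.elapsed(), 0.0)

    def expired(self) -> bool:
        return self.limit_s is not None and self.elapsed() >= self.limit_s
```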

Further, referring to Fig. 1 and Fig. 2, as a specific implementation of the human-computer interaction method provided by the present invention, after eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area, the method further includes:

judging whether the virtual target item reaches the virtual target area before the timing clock ends;

when the virtual target item reaches the virtual target area, controlling the target item to move to the target area; and when the virtual target item does not reach the virtual target area, controlling the target item to return to its initial position.

Specifically, whether the virtual target item has reached the virtual target area is judged from the distance between the virtual target item and the virtual target area, and the judgment may be made after the virtual target item moves toward the virtual target area and before the timing clock ends. When the virtual target item moves into the virtual target area, the target item is moved to the target area; the target item may move along with the virtual target item, or it may move directly to the target area once the virtual target item reaches the virtual target area. If the timing clock ends while the virtual target item has still not reached the virtual target area, the game fails; the delivery component 2 then controls the target item to return to its initial position, i.e., the target item is recovered and the game returns to its initial state.
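
The settlement logic described above could look roughly like this; `dispense_prize` and `recall_prize` stand in for the delivery-component commands and are assumptions rather than disclosed interfaces.

```python
def settle_round(virtual_item_pos, virtual_target_pos, timer_expired,
                 dispense_prize, recall_prize):
    """Dispense the prize if the virtual item has reached the target area; if the clock
    has run out without it arriving, recall the prize to its initial position."""
    reached = virtual_item_pos >= virtual_target_pos   # distance-based check on the display
    if reached:
        dispense_prize()        # physical target item moves to the pickup (target) area
    elif timer_expired:
        recall_prize()          # game failed: target item returns to its initial position
    return reached
```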

The following describes, as an optional implementation, how to judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold. The method mainly includes the following steps.

First, obtain the feature point set of the preset expression. Specifically, specific features, such as eye features, mouth features, or cheek features, are selected from the face of the preset expression and combined into a feature point set. The feature point set may be determined when the preset expression is defined, or it may be extracted after a particular preset expression has been selected.

Second, extract a facial feature point set from the user's facial expression. Specifically, the user's facial expression is captured by the image acquisition device 3 and then analyzed; the extracted facial feature points correspond one-to-one with the facial feature points of the preset expression.

Finally, calculate the matching degree between the facial feature point set and the feature point set of the preset expression. Specifically, the extracted facial feature points are compared one by one with the facial feature points of the preset expression, and the two feature point sets are then compared as a whole to obtain the comparison result.
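
A rough numerical sketch of the comparison (an illustrative metric, not the patent's exact formula), assuming both feature point sets are (N, 2) landmark arrays in one-to-one correspondence.

```python
import numpy as np

def match_degree(user_points, preset_points):
    """Normalize both landmark sets for translation and scale, then score them by mean
    point-to-point distance; 1.0 means identical layouts, values near 0.0 mean very
    different expressions."""
    def normalize(points):
        p = np.asarray(points, dtype=float)
        p = p - p.mean(axis=0)                 # remove translation
        scale = np.linalg.norm(p)              # remove overall scale
        return p / scale if scale else p
    u, q = normalize(user_points), normalize(preset_points)
    mean_dist = float(np.linalg.norm(u - q, axis=1).mean())
    return max(0.0, 1.0 - mean_dist)
```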

Embodiment 2

As another embodiment of the present invention, referring to Fig. 1 and Fig. 2 together, this embodiment differs from Embodiment 1 in the following respects; the human-computer interaction method provided by this embodiment is now described.

The human-computer interaction method is applicable to an interactive device and is used to control the movement of a target item to a target area. The interactive device includes a display 1 and a delivery component 2 that can store the target item; the target item and the target area are respectively mapped on the display 1 as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. Specifically, the virtual target item and the virtual target area are both shown directly on the display 1, while the target item is stored inside the delivery component 2, and the delivery component 2 can move the target item to the target area. In this embodiment, the relationship between the target item and the target area is likewise mapped on the display 1 as the relationship between the virtual target item and the virtual target area: as the virtual target item moves toward the virtual target area, the corresponding target item moves toward the target area in linkage with it, or the corresponding target item moves to the target area once the virtual target item reaches the virtual target area. The target item is generally an incentive prize, and moving it to the target area means dispensing it outside the interactive device where the user can pick it up. The delivery component 2 generally includes hardware such as a servo, a reducer, and a control circuit, and can control the movement, recovery, and dispensing of the target item.

The method includes the following steps:

S201. Obtain, respectively, the preset expression in a region to be eliminated and the user's facial expression. A preset expression is set in the region to be eliminated; the user needs to imitate the preset expression, and the imitated facial expression is compared with the preset expression until it matches. The preset expression may be generated randomly after the region to be eliminated is selected, or it may be the expression already shown in the region to be eliminated, in which case the user can choose a region to be eliminated according to their needs and their own situation.

S202. Judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold. The preset threshold is generally used to assess how well the user's facial expression matches the preset expression, and it can be adjusted as needed. When the matching degree is greater than the preset threshold, proceed to step S203; when the matching degree is less than the preset threshold, return to step S201.

S203. Eliminate the region to be eliminated, and control the virtual target item to move toward the virtual target area. After the region to be eliminated is removed, it is determined whether the virtual target item can move toward the virtual target area: if it can, the virtual target item moves toward the virtual target area; if it cannot, the virtual target item stays in place.

As an optional embodiment, step S203 may include: after the region to be eliminated is removed, the virtual target item moves to the virtual target area; at this point, if the timing clock has not yet ended, the game ends and the delivery component 2 moves the target item to the target area, where the user can pick it up. While the virtual target item moves to the virtual target area, the target item may move along with it and finally reach the target area, or the target item may remain stationary until the virtual target item reaches the virtual target area; this is not uniquely limited here. In this embodiment, if the virtual target item has moved to the virtual target area and the timing clock has not yet ended, the game may end directly, or the virtual target item and the virtual target area may be reset and steps S201 to S203 repeated until the timing clock ends.

Embodiment 3

An embodiment of the present invention provides a human-computer interaction device, as shown in Fig. 3, applicable to an interactive device and used to control the movement of a target item to a target area. The interactive device includes a display module 40 and a delivery module 50 that can store the target item; the target item and the target area are respectively mapped on the display module 40 as a virtual target item and a virtual target area, and multiple regions to be eliminated lie between the virtual target item and the virtual target area. The device includes:

an acquisition module 10, a judgment module 20, and a control module 30. The acquisition module 10 is configured to obtain, respectively, the preset expression in a region to be eliminated and the user's facial expression; the judgment module 20 is configured to judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold; and the control module 30 is configured to eliminate the region to be eliminated and control the virtual target item to move toward the virtual target area when the matching degree is greater than the preset threshold.
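
A minimal sketch of the three modules, assuming `camera`, `display`, and `scorer` objects with the indicated methods; these interfaces are illustrative and are not disclosed in the patent.

```python
class AcquisitionModule:
    """Obtains the preset expression of a region and the user's facial expression."""
    def __init__(self, camera, region_presets):
        self.camera, self.region_presets = camera, region_presets

    def acquire(self, region_id):
        return self.region_presets[region_id], self.camera.capture()

class JudgmentModule:
    """Decides whether the matching degree exceeds the preset threshold."""
    def __init__(self, scorer, threshold=0.8):
        self.scorer, self.threshold = scorer, threshold

    def matches(self, user_expr, preset_expr):
        return self.scorer(user_expr, preset_expr) > self.threshold

class ControlModule:
    """Eliminates the region and advances the virtual target item on a successful match."""
    def __init__(self, display):
        self.display = display

    def on_match(self, region_id):
        self.display.eliminate(region_id)
        self.display.advance_virtual_item()
```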

Embodiment 4

An embodiment of the present invention further provides a controller 4. As shown in Fig. 4, the controller 4 includes at least one processor 41 and a memory 42 communicatively connected to the at least one processor 41, wherein the memory 42 stores instructions executable by the at least one processor 41, and the instructions are executed by the at least one processor 41 so that the at least one processor 41 performs the human-computer interaction method described in any of the above embodiments.

The processor 41 may be a central processing unit (CPU). It may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination of such chips. A general-purpose processor may be a microprocessor, or the processor 41 may be any conventional processor.

As a non-transitory computer-readable storage medium, the memory 42 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the control method in the embodiments of the present application. By running the non-transitory software programs, instructions, and modules stored in the memory 42, the processor 41 executes the various functional applications and data processing of the server, i.e., implements the control method of the above method embodiments.

The memory 42 may include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the processing device operated by the server, and the like. In addition, the memory 42 may include a high-speed random access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 42 may optionally include memory located remotely from the processor 41, and such remote memory may be connected to a network connection device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The input device can receive input digital or character information and generate key signal inputs related to the user settings and function control of the processing device of the server. The output device may include a display device such as a display screen.

One or more modules are stored in the memory 42 and, when executed by the one or more processors 41, perform the method shown in Fig. 1.

Embodiment 5

An embodiment of the present invention further provides an interactive device; referring to Fig. 4, the interactive device includes the controller 4, the display 1, and the delivery component 2 of the above specific embodiments. The display 1 is connected to the controller 4, and one side of the display 1 is provided with an image acquisition device 3; the delivery component 2 is connected to the controller 4 and is used to store the target item and, under the control of the controller 4, deliver the target item to the target area.

Compared with the prior art, in the interactive device provided by the present invention, the target item that the delivery component 2 can dispense corresponds to the virtual target item on the display 1. Eliminating a region to be eliminated shortens the distance between the virtual target item and the virtual target area, so the virtual target item can move toward the virtual target area, which in turn drives the delivery component 2 to move the target item to the target area. When a region is eliminated, the user's facial expression is captured by the image acquisition device 3 and compared with the preset expression, and the region is eliminated according to the matching degree. This differs from traditional elimination mechanics, gives the user stronger interaction with the device, and links the target item with the virtual target item, thereby enhancing the interactivity between the target item and the user, making the device more playable and the human-computer interaction closer.

Further, referring to Fig. 4, as a specific implementation of the interactive device provided by the present invention, the device further includes an instruction input device 5 and an instruction output device 6, both connected to the controller 4. The instruction input device 5 includes any one or more of a microphone, a joystick, a touch screen, or a keyboard; the instruction output device 6 includes any one or more of an illuminating lamp or a loudspeaker. Specifically, the instruction input device 5 inputs instructions to the controller 4 through any one or more of these devices, that is, voice instructions, direction instructions, or other instructions are sent to the controller 4 so as to control it and thereby control the selection of the region to be eliminated. The instruction output device 6 outputs the instructions issued by the controller 4 in the form of sound, light, or text shown on the screen of the display 1; this feeds back the current running state of the game and gives the user comprehensive feedback, making the operation more immersive, engaging, and easy to operate.

Obviously, the above embodiments are merely examples given for clarity of description and are not intended to limit the implementations. Those of ordinary skill in the art can make other changes or modifications in different forms on the basis of the above description. It is neither necessary nor possible to exhaust all implementations here, and obvious changes or modifications derived therefrom still fall within the protection scope of the present invention.

Claims (11)

1. A human-computer interaction method, applicable to an interactive device, characterized in that the method is used to control a target item to move to a target area, the interactive device comprises a display and a delivery component capable of storing the target item, the target item and the target area are respectively mapped on the display as a virtual target item and a virtual target area, and a plurality of regions to be eliminated lie between the virtual target item and the virtual target area, the method comprising:
obtaining, respectively, the preset expression in a region to be eliminated and the user's facial expression;
judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold; and
when the matching degree is greater than the preset threshold, eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area.
2. The human-computer interaction method according to claim 1, characterized in that, before obtaining the preset expression in the region to be eliminated, the method comprises:
obtaining a start instruction; and
setting a timing clock according to the start instruction.
3. The human-computer interaction method according to claim 2, characterized in that, when the matching degree is greater than the preset threshold, the region to be eliminated is eliminated, the virtual target item is controlled to move toward the virtual target area, and the steps of respectively obtaining the preset expression in the region to be eliminated and the user's facial expression and judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold are repeated until the timing ends or the virtual target item moves to the virtual target area.
4. The human-computer interaction method according to claim 2, characterized in that, when the matching degree is less than the preset threshold, a prompt signal indicating an unsuccessful match is output, and the steps of respectively obtaining the preset expression in the region to be eliminated and the user's facial expression and judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold are repeated until the matching degree is greater than the preset threshold or the timing ends.
5. The human-computer interaction method according to claim 2, characterized in that, after eliminating the region to be eliminated and controlling the virtual target item to move toward the virtual target area, the method further comprises:
judging whether the virtual target item reaches the virtual target area before the timing clock ends; and
when the virtual target item reaches the virtual target area, controlling the target item to move to the target area.
6. The human-computer interaction method according to claim 5, characterized in that, when the virtual target item does not reach the virtual target area, the target item is controlled to return to its initial position.
7. The human-computer interaction method according to claim 1, characterized in that judging whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than the preset threshold comprises:
obtaining a feature point set of the preset expression;
extracting a facial feature point set from the user's facial expression; and
calculating the matching degree between the facial feature point set and the feature point set of the preset expression.
8. A human-computer interaction device, applicable to an interactive device, characterized in that the device is used to control a target item to move to a target area, the interactive device comprises a display module and a delivery module capable of storing the target item, the target item and the target area are respectively mapped on the display module as a virtual target item and a virtual target area, and a plurality of regions to be eliminated lie between the virtual target item and the virtual target area, the device comprising:
an acquisition module, configured to obtain, respectively, the preset expression in a region to be eliminated and the user's facial expression;
a judgment module, configured to judge whether the matching degree between the user's facial expression and the preset expression in the region to be eliminated is greater than a preset threshold; and
a control module, configured to eliminate the region to be eliminated and control the virtual target item to move toward the virtual target area when the matching degree is greater than the preset threshold.
9. A controller, characterized by comprising:
at least one processor; and
a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor performs the human-computer interaction method according to any one of claims 1-7.
10. An interactive device, characterized by comprising:
the controller according to claim 9;
a display connected to the controller, one side of the display being provided with an image acquisition device; and
a delivery component connected to the controller, configured to store a target item and, under the control of the controller, deliver the target item to a target area.
11. The interactive device according to claim 10, characterized in that it further comprises an instruction input device and an instruction output device, both connected to the controller, wherein the instruction input device comprises any one or more of a microphone, a joystick, a touch screen, or a keyboard, and the instruction output device comprises any one or more of an illuminating lamp or a loudspeaker.
CN201910329020.6A 2019-04-23 2019-04-23 Human-computer interaction method, device, controller and interactive device Active CN110084979B (en)

Priority Applications (1)

Application Number: CN201910329020.6A (granted as CN110084979B)
Priority Date: 2019-04-23
Filing Date: 2019-04-23
Title: Human-computer interaction method, device, controller and interactive device

Applications Claiming Priority (1)

Application Number: CN201910329020.6A
Priority Date: 2019-04-23
Filing Date: 2019-04-23
Title: Human-computer interaction method, device, controller and interactive device

Publications (2)

Publication Number Publication Date
CN110084979A 2019-08-02
CN110084979B CN110084979B (en) 2022-05-10

Family

ID=67416303

Family Applications (1)

Application Number: CN201910329020.6A (Active; granted as CN110084979B)
Title: Human-computer interaction method, device, controller and interactive device
Priority Date: 2019-04-23
Filing Date: 2019-04-23

Country Status (1)

Country Link
CN (1) CN110084979B (en)

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020123377A1 (en) * 2001-03-01 2002-09-05 Barry Shulman Computer assisted poker tournament
CN1872373A (en) * 2005-05-31 2006-12-06 阿鲁策株式会社 Player authentication device, player management server, play machine and interlayer device
CN1975748A (en) * 2006-12-15 2007-06-06 浙江大学 Virtual network Marathon body-building game method
CN101098463A (en) * 2007-07-12 2008-01-02 浙江大学 Intelligent network camera with the function of protecting fixed targets
CN101681539A (en) * 2006-06-05 2010-03-24 Igt公司 Simulating a real gaming environment with an interactive host and player
CN104915658A (en) * 2015-06-30 2015-09-16 东南大学 Emotion component analyzing method and system based on emotion distribution learning
CN104976999A (en) * 2015-06-30 2015-10-14 北京奇虎科技有限公司 A method and device for searching items based on a mobile device
CN105139525A (en) * 2015-08-26 2015-12-09 碧塔海成都企业管理咨询有限责任公司 Vending machine and automatic vending method
CN106600844A (en) * 2016-12-21 2017-04-26 谢代英 Compound type claw crane and selling method thereof
CN107329644A (en) * 2016-04-29 2017-11-07 宇龙计算机通信科技(深圳)有限公司 A kind of icon moving method and device
CN107362528A (en) * 2016-05-13 2017-11-21 环球娱乐株式会社 Tackle device and game machine
CN107452163A (en) * 2017-07-21 2017-12-08 沈阳中钞信达金融设备有限公司 A kind of automatic automatic selling supermarket system
CN107480178A (en) * 2017-07-01 2017-12-15 广州深域信息科技有限公司 A kind of pedestrian's recognition methods again compared based on image and video cross-module state
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN108694740A (en) * 2017-03-06 2018-10-23 索尼公司 Information processing equipment, information processing method and user equipment
CN108765780A (en) * 2018-04-27 2018-11-06 北京云点联动科技发展有限公司 A kind of doll machine and its application method based on recognition of face
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN109243101A (en) * 2018-09-14 2019-01-18 深圳市丰巢科技有限公司 A kind of method, express delivery cabinet and storage medium grabbing article
CN109284591A (en) * 2018-08-17 2019-01-29 北京小米移动软件有限公司 Face unlocking method and device
CN109414612A (en) * 2016-04-19 2019-03-01 S·萨米特 Virtual reality haptic systems and devices
CN109461273A (en) * 2018-10-22 2019-03-12 广州扬盛计算机软件有限公司 A kind of bluetooth doll machine and its control method
CN109544821A (en) * 2018-11-21 2019-03-29 网易(杭州)网络有限公司 A kind of information processing method and long-range doll machine system


Also Published As

Publication number Publication date
CN110084979B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
JP5991773B2 (en) Game machine
WO2018113653A1 (en) Scene switching method based on mobile terminal, and mobile terminal
JP2016135181A (en) Game machine system, game machine, and portable terminal
US20170087453A1 (en) Magic wand methods, apparatuses and systems for defining, initiating, and conducting quests
US11270087B2 (en) Object scanning method based on mobile terminal and mobile terminal
RU2011116297A (en) SYSTEM FOR MUSICAL INTERACTION OF AVATARS
JP6771038B2 (en) Set up a game session to reduce latency
US20150072788A1 (en) Game system capable of displaying comments, comment display control method, and computer redable storage medium
JP2017121021A (en) System, method and program for distributing digital content
JP2017121036A (en) System, method and program for distributing digital content
CN111298430A (en) Virtual item control method and device, storage medium and electronic device
KR101487077B1 (en) Game system and control method thereof
CN103405911A (en) Method and system for prompting mahjong draws
WO2025011289A1 (en) Method and apparatus for generating game plot, and electronic device
CN106097003A (en) Method, equipment and the system that a kind of virtual coin reassigns
CN110559657B (en) Network game control method, device and storage medium
CN112473139B (en) Object form switching method and device, storage medium, and electronic device
CN114146426A (en) A control method, device, computer equipment and storage medium for a secret room game
JP2014195527A (en) Server system and program
JP2013230226A (en) Game management server apparatus, program for game management server apparatus, and program for terminal device
CN110025961A (en) a game system
JP5848614B2 (en) Pachinko machine
CN110084979A (en) Man-machine interaction method, device and controller and interactive device
WO2025077421A1 (en) Song playback method and apparatus, electronic device, and storage medium
CN116650950B (en) Control system and method for VR game

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
    Denomination of invention: Human computer interaction methods, devices, controllers, and interactive equipment
    Granted publication date: 20220510
    Pledgee: Agricultural Bank of China Limited Guangzhou Development Zone Branch
    Pledgor: DMAI (GUANGZHOU) Co.,Ltd.
    Registration number: Y2025980014760