CN108563331A - Device and method for determining action matching results in human-computer interaction, computer-readable storage medium, and human-computer interaction device - Google Patents
- Publication number: CN108563331A
- Application number: CN201810301244.1A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G Physics; G06 Computing or calculating; G06F Electric digital data processing; G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer)
- G06V40/20 — Movements or behaviour, e.g. gesture recognition (G06V Image or video recognition or understanding; G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)
Description
Technical Field

The present disclosure relates to the field of artificial intelligence, and in particular to a device for determining action matching results in human-computer interaction, a method for determining action matching results in human-computer interaction, a computer-readable storage medium, and a human-computer interaction device.
Background Art

The description of the background art in the present disclosure is related art relevant to the present disclosure, provided only to illustrate and facilitate understanding of the content of the disclosure; it should not be construed as an admission that the applicant regards it as prior art as of the filing date of the first application.

In recent years, motion capture has become a key technology in the study of human motion and posture and plays an increasingly important role; it is widely recognized that enabling interaction between body movements and information devices by recognizing human motion and posture is highly desirable. Existing motion capture technology, however, is generally applied in fields such as large entertainment equipment, animation production, gait analysis, biomechanics, and ergonomics. With the spread of mobile devices such as mobile phones and tablet computers, these devices have become everyday entertainment essentials thanks to their simplicity, convenience, and freedom from constraints of time and place, so achieving motion capture with good display and interaction effects on mobile devices such as phones and tablets is an urgent problem.
Summary of the Invention

An embodiment of the first aspect of the present disclosure provides a method for determining an action matching result in human-computer interaction, including:

displaying an instruction image on a display unit and collecting action information of a person;

computing a total score from the action information and a pre-stored template at set time intervals;

displaying a matching result corresponding to the total score on the display unit.
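The three claimed steps can be sketched as a simple loop; every name below is a hypothetical placeholder, and the scoring stub merely stands in for the template comparison, which the later claims refine.

```python
def compute_total_score(action_info, template):
    # Hypothetical scoring stub: percentage of collected samples that
    # match the corresponding template pose.
    matches = sum(1 for a, t in zip(action_info, template) if a == t)
    return 100.0 * matches / max(len(template), 1)

def run_interaction(frames, template, interval_frames):
    # Every `interval_frames` collected samples ("every set time"), score
    # the buffered action information against the pre-stored template and
    # record the matching result (a stand-in for displaying it).
    results = []
    buffer = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == interval_frames:
            results.append(compute_total_score(buffer, template))
            buffer = []
    return results

# Two scoring intervals; the user hits 3 of 4 template poses in each.
print(run_interaction(["A", "B", "C", "X"] * 2, ["A", "B", "C", "D"], 4))
# → [75.0, 75.0]
```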
Preferably, the total score includes a matching score, where the matching score is obtained by the following steps:

extracting a plurality of single-frame images from the action images in the action information at set time intervals;

matching each of the single-frame images against the pre-stored template;

calculating the matching score from the matching rates of the plurality of single-frame images against the template.
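As a rough illustration of these steps, the sketch below scores frames as 2-D key-point sets against a template. The key-point representation, the tolerance test, and averaging the per-frame rates are all assumptions, since the claim only says the matching score is calculated from the matching rates.

```python
def frame_match_rate(frame_points, template_points, tol=0.05):
    # Hypothetical per-frame match rate: fraction of body key points lying
    # within `tol` of the template's coordinate points.
    hits = sum(
        1
        for (x1, y1), (x2, y2) in zip(frame_points, template_points)
        if abs(x1 - x2) <= tol and abs(y1 - y2) <= tol
    )
    return hits / max(len(template_points), 1)

def matching_score(frames, template_points):
    # Aggregate the per-frame rates into a 0-100 matching score. A plain
    # mean is an assumption; the claim does not fix the aggregation.
    rates = [frame_match_rate(f, template_points) for f in frames]
    return 100.0 * sum(rates) / max(len(rates), 1)

template = [(0.0, 0.0), (1.0, 1.0)]
frames = [
    [(0.0, 0.0), (1.0, 1.0)],  # both key points on template: rate 1.0
    [(0.0, 0.0), (2.0, 2.0)],  # one of two key points matches: rate 0.5
]
print(matching_score(frames, template))  # → 75.0
```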
Preferably, the total score further includes an offset score, obtained by the following step:

calculating the offset score from the time at which the person performs the action in the action information and the set time.
Preferably, the offset score is calculated from the time at which the person performs the action in the action information and the set time according to:

ΔT = |T1 - T2|;

where T_offset is the offset score, ΔT is the offset time, T1 is the set time, T2 is the time at which the person performs the action in the action information, and X is a random constant with X > 0.
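The published text defines only the offset time ΔT = |T1 - T2| and the random constant X > 0; the mapping from ΔT to the offset score is not spelled out. The sketch below therefore uses a purely assumed decay (a smaller offset earns a higher score, scaled by X); the function name and the mapping are illustrative only.

```python
def offset_score(t_set, t_action, x=50.0):
    # dT = |T1 - T2| is the only relation the text gives; the decay below
    # (score shrinks as the offset grows, scaled by the constant X) is an
    # assumed illustration, not the patent's formula.
    dt = abs(t_set - t_action)
    return x / (dt + 1.0)

print(offset_score(10.0, 10.0))  # on time → 50.0
print(offset_score(10.0, 14.0))  # 4 s late → 10.0
```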
Preferably, the total score further includes a pre-stored random score.
Preferably, the total score is calculated by the formula:

T_total = N1 × T_match + N2 × T_offset + N3 × T_random;

N1 + N2 + N3 = 100%;

where T_total is the total score, T_match is the matching score, T_offset is the offset score, T_random is the random score, and N1, N2 and N3 are random percentages with N1 > 0, N2 ≥ 0 and N3 ≥ 0.
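The weighted sum above can be written directly; the 60/30/10 weight split below is an arbitrary example, since the patent only requires that the percentages sum to 100% with N1 > 0.

```python
def total_score(t_match, t_offset, t_random, n1=0.6, n2=0.3, n3=0.1):
    # T_total = N1*T_match + N2*T_offset + N3*T_random, with the weights
    # summing to 100% and N1 > 0. The 60/30/10 split is illustrative only.
    assert n1 > 0 and n2 >= 0 and n3 >= 0 and abs(n1 + n2 + n3 - 1.0) < 1e-9
    return n1 * t_match + n2 * t_offset + n3 * t_random

print(total_score(90.0, 80.0, 70.0))  # → 85.0
```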
Preferably, displaying the matching result corresponding to the total score on the display unit specifically includes:

displaying a score and/or an animation corresponding to the total score on the display unit.
An embodiment of the second aspect of the present disclosure provides a device for determining an action matching result in human-computer interaction, including: an interaction module configured to display an instruction image on a display unit and collect action information of a person; a scoring module configured to compute a total score from the action information and a pre-stored template at set time intervals; and an execution module configured to control the display unit to display a matching result corresponding to the total score.
Preferably, the scoring module includes: an extraction unit configured to extract a plurality of single-frame images from the action images in the action information at set time intervals; a matching unit configured to match each of the single-frame images against the pre-stored template; a matching score unit configured to calculate a matching score from the matching rates of the plurality of single-frame images against the template; and a total score unit configured to calculate the total score from the matching score.

Preferably, the scoring module further includes an offset score unit configured to calculate an offset score from the time at which the person performs the action in the action information and the set time; the total score unit calculates the total score from the offset score and the matching score.

Preferably, the scoring module further includes a random score unit configured to extract a pre-stored random score; the total score unit calculates the total score from the offset score, the matching score, and the random score.
An embodiment of the third aspect of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of any of the above methods for determining an action matching result in human-computer interaction are implemented.

An embodiment of the fourth aspect of the present disclosure provides a human-computer interaction device, including a memory, a processor, and a program stored in the memory and runnable on the processor; when the processor executes the program, the steps of any of the above methods for determining an action matching result in human-computer interaction are implemented.
In the technical solution provided by the present invention, instruction images (such as multiple stick figures, animations, or animal images in different postures) are displayed on the display unit (which may be a display screen or the like), and the user performs the same body movements as the instruction images, forming dance movements. Meanwhile, the user's image is captured, the person's action image is matched against coordinate points, a total score is computed from the person's action information and the pre-stored template, and the matching result corresponding to the total score (such as a score and/or animated special effects) is displayed on the display unit. This guides users who are not skilled at dancing, enables them to perform standard dance moves, and improves the entertainment value and hence the user experience. In addition, each time the user completes an action, the user's action is compared with the pre-stored template; this detection approach achieves accurate detection and improves the detection precision of the product, further improving the product experience.
Additional aspects and advantages of the present disclosure will become apparent in the following description, or may be learned through practice of the present disclosure.
Brief Description of the Drawings

The above and/or additional aspects and advantages of the present disclosure will become apparent and readily understood from the following description of embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the hardware structure of a terminal device according to an embodiment of the present disclosure;

FIG. 2 is a structural block diagram of a first embodiment of the device for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 3 is a structural block diagram of a second embodiment of the device for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 4 is a structural block diagram of a third embodiment of the device for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 5 is a structural block diagram of a fourth embodiment of the device for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 6 is a flow chart of a first embodiment of the method for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 7 is a flow chart of a second embodiment of the method for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 8 is a flow chart of a third embodiment of the method for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 9 is a flow chart of a fourth embodiment of the method for determining action matching results in human-computer interaction according to the present disclosure;

FIG. 10 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present disclosure;

FIG. 11 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the present disclosure.
The correspondence between the reference numerals in FIGS. 1 to 5, 10 and 11 and the component names is as follows:

result determination device 100; interaction module 101; scoring module 102; extraction unit 1021; matching unit 1022; matching score unit 1023; total score unit 1024; offset score unit 1025; random score unit 1206; execution module 103; 1 wireless communication unit; 2 input unit; 3 user input unit; 4 sensing unit; 5 output unit; 6 memory; 7 interface unit; 8 controller; 9 power supply unit; 80 human-computer interaction device; 801 memory; 802 processor; 900 computer-readable storage medium; 901 non-transitory computer-readable instructions.
Detailed Description of the Embodiments
In order to understand the above objectives, features and advantages of the present disclosure more clearly, the present disclosure is further described in detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features of those embodiments may be combined with one another.

Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure; however, the present disclosure may also be implemented in ways other than those described here, and the scope of protection of the present disclosure is therefore not limited by the specific embodiments disclosed below.

The following discussion provides several embodiments of the present disclosure. Although each embodiment represents a single combination of the invention, different embodiments of the present disclosure may be substituted for one another or combined, so the present disclosure may also be regarded as encompassing all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes A, B and C, and another embodiment includes a combination of B and D, the present disclosure should also be regarded as including embodiments containing all other possible combinations of one or more of A, B, C and D, even though such an embodiment may not be explicitly described in the following text.
As shown in FIG. 1, the human-computer interaction device, i.e. the terminal device, may be implemented in various forms. Terminal devices in the present disclosure may include, but are not limited to, mobile terminal devices such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), navigation devices, vehicle-mounted terminal devices, vehicle-mounted display terminals and vehicle-mounted electronic rearview mirrors, as well as fixed terminal devices such as digital TVs and desktop computers.
In an embodiment of the present disclosure, the terminal device may include a wireless communication unit 1, an A/V (audio/video) input unit 2, a user input unit 3, a sensing unit 4, an output unit 5, a memory 6, an interface unit 7, a controller 8, a power supply unit 9, and so on. The A/V (audio/video) input unit 2 includes, but is not limited to, a camera, a front camera, a rear camera, and various audio and video input devices. Those skilled in the art should understand that the components included in the terminal device listed in the above embodiment are not limited to the types described above, and fewer or more components may be included.
Those skilled in the art should understand that the various embodiments described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For hardware implementation, the embodiments described here may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described here; in some cases, such an implementation may be carried out in the controller. For software implementation, an embodiment such as a procedure or a function may be implemented with a separate software module that performs at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, stored in the memory, and executed by the controller.
As shown in FIG. 2, the device 100 for determining an action matching result in human-computer interaction provided by the embodiment of the first aspect of the present disclosure includes an interaction module 101, a scoring module 102 and an execution module 103.

Specifically, the interaction module 101 is configured to display an instruction image on a display unit and collect action information of a person; the scoring module 102 is configured to compute a total score from the action information and a pre-stored template at set time intervals; and the execution module 103 is configured to control the display unit to display a matching result corresponding to the total score.

In the device for determining action matching results in human-computer interaction provided by the present disclosure, instruction images (such as multiple stick figures, animations, or animal images in different postures) are displayed on the display unit (which may be a display screen or the like), and the user performs the same body movements as the instruction images, forming dance movements. Meanwhile, the interaction module collects the user's image, the comparison module matches the person's action image against coordinate points, a total score is computed from the person's action information and the pre-stored template, and the matching result corresponding to the total score (such as a score and/or animated special effects) is displayed on the display unit. This guides users who are not skilled at dancing, enables them to perform standard dance moves, and improves the entertainment value and hence the user experience. In addition, each time the user completes an action, the scoring module compares the user's action with the pre-stored template; this detection approach achieves accurate detection and improves the detection precision of the product, further improving the product experience.
In an embodiment of the present disclosure, as shown in FIG. 3, the scoring module 102 includes an extraction unit 1021, a matching unit 1022, a matching score unit 1023 and a total score unit 1024.

Specifically, the extraction unit 1021 is configured to extract a plurality of single-frame images from the action images in the action information at set time intervals; the matching unit 1022 is configured to match each single-frame image against the pre-stored template; the matching score unit 1023 is configured to calculate a matching score from the matching rates of the plurality of single-frame images against the template; and the total score unit 1024 is configured to calculate the total score from the matching score.

In this embodiment, the extraction unit 1021 extracts a plurality of single-frame images from the action images in the action information at set time intervals, for example one hundred frames per unit time; the matching unit 1022 matches the one hundred frames against the pre-stored template and judges the coincidence rate of the one hundred frames. This detection approach achieves accurate detection and improves the detection precision of the product, thereby improving the product experience. The matching score unit 1023 calculates a matching score, such as 60, 70, 80 or 90 points, from the matching rates of the one hundred frames against the template; the total score unit 1024 calculates the total score, such as 75, 80, 85, 90 or 95 points, from the matching score.
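A minimal sketch of the extraction unit's sampling follows. The figure of one hundred frames per unit time comes from the embodiment; evenly spaced sampling and the function name are assumptions for illustration.

```python
def extract_single_frames(video_frames, num_samples=100):
    # Extraction-unit sketch: pick `num_samples` evenly spaced single-frame
    # images from the footage captured in one scoring interval. If fewer
    # frames were captured, keep them all.
    if len(video_frames) <= num_samples:
        return list(video_frames)
    step = len(video_frames) / num_samples
    return [video_frames[int(i * step)] for i in range(num_samples)]

sampled = extract_single_frames(list(range(1000)))
print(len(sampled), sampled[:3])  # → 100 [0, 10, 20]
```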
In an embodiment of the present disclosure, as shown in FIG. 4, the scoring module 102 includes an extraction unit 1021, a matching unit 1022, a matching score unit 1023, an offset score unit 1025 and a total score unit 1024.

Specifically, the extraction unit 1021 is configured to extract a plurality of single-frame images from the action images in the action information at set time intervals; the matching unit 1022 is configured to match each single-frame image against the pre-stored template; the matching score unit 1023 is configured to calculate a matching score from the matching rates of the plurality of single-frame images against the template; the offset score unit 1025 is configured to calculate an offset score from the time at which the person performs the action in the action information and the set time; and the total score unit 1024 is configured to calculate the total score from the offset score and the matching score.

In this embodiment, the extraction unit 1021 extracts a plurality of single-frame images from the action images in the action information at set time intervals, for example one hundred frames per unit time; the matching unit 1022 matches the one hundred frames against the pre-stored template and judges the coincidence rate of the one hundred frames. This detection approach achieves accurate detection and improves the detection precision of the product, thereby improving the product experience. The matching score unit 1023 calculates a matching score (such as 60, 70, 80 or 90 points) from the matching rates of the one hundred frames against the template, comparing the user's actions with the pre-stored template; the offset score unit 1025 calculates an offset score, such as 60, 70, 80 or 90 points, from the time at which the person performs the action in the action information and the set time; and the total score unit 1024 calculates the total score, such as 75, 80, 85, 90 or 95 points, from the offset score and the matching score.
In an embodiment of the present disclosure, as shown in FIG. 5, the scoring module 102 includes an extraction unit 1021, a matching unit 1022, a matching score unit 1023, an offset score unit 1025, a random score unit 1206 and a total score unit 1024.

Specifically, the extraction unit 1021 is configured to extract a plurality of single-frame images from the action images in the action information at set time intervals; the matching unit 1022 is configured to match each single-frame image against the pre-stored template; the matching score unit 1023 is configured to calculate a matching score from the matching rates of the plurality of single-frame images against the template; the offset score unit 1025 is configured to calculate an offset score from the time at which the person performs the action in the action information and the set time; the random score unit 1206 is configured to extract a pre-stored random score; and the total score unit 1024 is configured to calculate the total score from the offset score, the matching score and the random score.

In this embodiment, the extraction unit 1021 extracts a plurality of single-frame images from the action images in the action information at set time intervals, for example one hundred frames per unit time; the matching unit 1022 matches the one hundred frames against the pre-stored template and judges the coincidence rate of the one hundred frames. This detection approach achieves accurate detection and improves the detection precision of the product, thereby improving the product experience. The matching score unit 1023 calculates a matching score, such as 60, 70, 80 or 90 points, from the matching rates of the one hundred frames against the template; the offset score unit 1025 calculates an offset score, such as 60, 70, 80 or 90 points, from the time at which the person performs the action in the action information and the set time; the random score unit 1206 extracts a pre-stored random score, such as 60, 70, 80 or 90 points; and the total score unit 1024 calculates the total score, such as 75, 80, 85, 90 or 95 points, from the offset score, the matching score and the random score.
Embodiment 1
As shown in FIG. 6, the embodiment of the second aspect of the present disclosure provides a method for determining an action matching result in human-computer interaction, including:

Step 10, displaying an instruction image on a display unit and collecting action information of a person;

Step 20, computing a total score from the action information and a pre-stored template at set time intervals;

Step 30, displaying a matching result corresponding to the total score on the display unit.

In the method for determining action matching results in human-computer interaction provided by the present disclosure, instruction images (such as multiple stick figures, animations, or animal images in different postures) are displayed on the display unit (which may be a display screen or the like), and the user performs the same body movements as the instruction images, forming dance movements. Meanwhile, the user's image is captured, the person's action image is matched against coordinate points, a total score is computed from the person's action information and the pre-stored template, and the matching result corresponding to the total score (such as a score and/or animated special effects) is displayed on the display unit. This guides users who are not skilled at dancing, enables them to perform standard dance moves, and improves the entertainment value and hence the user experience. In addition, each time the user completes an action, the user's action is compared with the pre-stored template; this detection approach achieves accurate detection and improves the detection precision of the product, further improving the product experience.
Embodiment 2

As shown in FIG. 7, in an embodiment of the present disclosure, step 20 includes:

Step 21, extracting a plurality of single-frame images from the action images in the action information at set time intervals;

Step 22, matching each single-frame image against the pre-stored template;

Step 23, calculating the total score from the matching rates of the plurality of single-frame images against the template.
在本实施例中人机交互中的动作匹配结果确定方法包括:In this embodiment, the method for determining the action matching result in human-computer interaction includes:
步骤10,在显示单元上显示指示图像,采集人的动作信息;Step 10, displaying an instruction image on the display unit, and collecting motion information of the person;
步骤21,每隔设定时间从动作信息中的动作图像中提取多个单帧图像;Step 21, extracting a plurality of single-frame images from the action images in the action information every set time;
步骤22,将每个单帧图像与预先存储的模板匹配;Step 22, matching each single-frame image with a pre-stored template;
步骤23,根据多个单帧图像与模板匹配率计算得到总得分;Step 23, calculating the total score according to the matching rate of multiple single-frame images and the template;
步骤30,在显示单元上显示与总分数相对应的匹配结果。Step 30, displaying the matching result corresponding to the total score on the display unit.
在该实施例中,每隔设定时间从动作信息中的动作图像中提取多个单帧图像,如单位时间内取一百帧图像;将一百帧图像与预先存储的模板匹配;判断一百帧图像的重合率,该种检测方式能够实现精准的检测,提高了产品的检测精度,从而提高了产品的体验效果;根据一百帧图像与模板的匹配率计算得到匹配分数,比如60分、70分、80分或90分等等;根据匹配分数计算得到总得分,比如75分、80分、85分、90分、95分等。In this embodiment, multiple single-frame images are extracted from the action images in the action information every set time, such as taking one hundred frame images per unit time; matching one hundred frame images with pre-stored templates; judging one The overlap rate of 100 frames of images, this detection method can achieve accurate detection, improve the detection accuracy of the product, thereby improving the product experience effect; calculate the matching score based on the matching rate of 100 frames of images and templates, such as 60 points , 70 points, 80 points or 90 points, etc.; calculate the total score based on the matching score, such as 75 points, 80 points, 85 points, 90 points, 95 points, etc.
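The frame-by-frame matching described in this embodiment can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each frame's pose has been reduced to a list of normalized joint coordinates, and treats the frame-to-template match rate as the fraction of joints falling within a tolerance of the corresponding template joints. The function names and the tolerance value are assumptions.

```python
import math

def frame_match_rate(frame_joints, template_joints, tol=0.1):
    """Fraction of joints within `tol` (normalized units) of the template pose."""
    hits = sum(
        1 for (x1, y1), (x2, y2) in zip(frame_joints, template_joints)
        if math.hypot(x1 - x2, y1 - y2) <= tol
    )
    return hits / len(template_joints)

def matching_score(frames, template, max_score=100):
    """Average match rate over all sampled frames, scaled to a 0-100 score."""
    rates = [frame_match_rate(f, template) for f in frames]
    return max_score * sum(rates) / len(rates)

# Two sampled frames against a single-pose template (toy data).
template = [(0.0, 0.0), (0.5, 0.5)]
frames = [
    [(0.0, 0.05), (0.5, 0.5)],  # both joints within tolerance -> rate 1.0
    [(0.3, 0.0), (0.5, 0.5)],   # first joint off by 0.3 -> rate 0.5
]
print(matching_score(frames, template))  # 75.0
```

Averaging the per-frame rates over all sampled frames yields a matching score on the same 0-100 scale as the point values in the examples above.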
Embodiment Three
As shown in FIG. 8, in one embodiment of the present disclosure, step 20 includes:
Step 21: at each set interval, extract multiple single-frame images from the action images in the action information;
Step 22: match each single-frame image against the pre-stored template;
Step 23: compute a matching score from the matching rates between the multiple single-frame images and the template;
Step 24: compute an offset score from the time at which the person in the action information performs the action and the set time;
Step 25: compute the total score from the offset score and the matching score.
In this embodiment, the method for determining an action matching result in human-computer interaction includes:
Step 10: display instruction images on the display unit and collect the person's action information;
Step 21: at each set interval, extract multiple single-frame images from the action images in the action information;
Step 22: match each single-frame image against the pre-stored template;
Step 23: compute a matching score from the matching rates between the multiple single-frame images and the template;
Step 24: compute an offset score from the time at which the person in the action information performs the action and the set time;
Step 25: compute the total score from the offset score and the matching score;
Step 30: display, on the display unit, the matching result corresponding to the total score.
In this embodiment, multiple single-frame images are extracted from the action images in the action information at each set interval, for example one hundred frames per unit time. The one hundred frames are matched against the pre-stored template and their overlap rate is evaluated; this detection scheme achieves accurate detection and improves the detection precision of the product, thereby improving the product experience. A matching score (such as 60, 70, 80 or 90 points) is computed from the matching rate between the one hundred frames and the template, comparing the user's action with the pre-stored template. An offset score, such as 60, 70, 80 or 90 points, is computed from the time at which the person performs the action and the set time, and the total score, such as 75, 80, 85, 90 or 95 points, is computed from the offset score and the matching score.
In one embodiment of the present disclosure, the offset score is computed from the time at which the person in the action information performs the action and the set time according to:
ΔT = |T1 − T2|;
where T_offset is the offset score, ΔT is the offset time, T1 is the set time, X is a random constant with X > 0, and T2 is the time at which the person in the action information performs the action.
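The offset-time computation ΔT = |T1 − T2| can be sketched as below. The patent text defines ΔT and a positive constant X, but the mapping from ΔT to the offset score used here (a linear penalty clamped at zero) is purely an assumed illustration, as are the function and parameter names.

```python
def offset_score(t_set, t_action, x=10.0, max_score=100.0):
    """Offset score: higher when the action occurs closer in time to the cue.

    t_set:    the set time T1 at which the action should occur
    t_action: the time T2 at which the person actually performed the action
    x:        a positive constant (X > 0); its exact role in the patent's
              formula is an assumption here
    """
    delta_t = abs(t_set - t_action)           # offset time ΔT = |T1 - T2|
    return max(0.0, max_score - x * delta_t)  # assumed linear penalty, clamped

print(offset_score(5.0, 5.5))  # 95.0  (ΔT = 0.5)
print(offset_score(5.0, 7.0))  # 80.0  (ΔT = 2.0)
```

A well-timed action (small ΔT) thus lands near the top of the 0-100 range, matching the example point values in the text.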
Embodiment Four
As shown in FIG. 9, step 21: at each set interval, extract multiple single-frame images from the action images in the action information;
Step 22: match each single-frame image against the pre-stored template;
Step 23: compute a matching score from the matching rates between the multiple single-frame images and the template;
Step 24: compute an offset score from the time at which the person in the action information performs the action and the set time;
Step 25: retrieve the pre-stored random score;
Step 26: compute the total score from the offset score, the random score and the matching score.
In this embodiment, the method for determining an action matching result in human-computer interaction includes:
Step 10: display instruction images on the display unit and collect the person's action information;
Step 21: at each set interval, extract multiple single-frame images from the action images in the action information;
Step 22: match each single-frame image against the pre-stored template;
Step 23: compute a matching score from the matching rates between the multiple single-frame images and the template;
Step 24: compute an offset score from the time at which the person in the action information performs the action and the set time;
Step 25: retrieve the pre-stored random score;
Step 26: compute the total score from the offset score, the random score and the matching score;
Step 30: display, on the display unit, the matching result corresponding to the total score.
In this embodiment, multiple single-frame images are extracted from the action images in the action information at each set interval, for example one hundred frames per unit time. The one hundred frames are matched against the pre-stored template and their overlap rate is evaluated; this detection scheme achieves accurate detection and improves the detection precision of the product, thereby improving the product experience. A matching score, such as 60, 70, 80 or 90 points, is computed from the matching rate between the one hundred frames and the template; an offset score, such as 60, 70, 80 or 90 points, is computed from the time at which the person performs the action and the set time; the pre-stored random score, such as 60, 70, 80 or 90 points, is retrieved; and the total score, such as 75, 80, 85, 90 or 95 points, is computed from the offset score, the matching score and the random score.
In one embodiment of the present disclosure, the total score is computed as:
T_total = N1*T_match + N2*T_offset + N3*T_random;
N1 + N2 + N3 = 100%;
where T_total is the total score, T_match is the matching score, T_offset is the offset score, and T_random is the random score; N1, N2 and N3 are percentages, with N1 > 0, N2 ≥ 0 and N3 ≥ 0. When the total score does not include the offset score and/or the random score, N2 and/or N3 are 0, that is: T_total = N1*T_match + N3*T_random with N1 + N3 = 100%; T_total = N1*T_match + N2*T_offset with N1 + N2 = 100%; or T_total = N1*T_match with N1 = 100%. In some embodiments of the present disclosure, N1 (90%) + N2 (8%) + N3 (2%) = 100%; N1 (80%) + N2 (15%) + N3 (5%) = 100%; N1 (95%) + N2 (3%) + N3 (2%) = 100%; N1 (80%) + N2 (0%) + N3 (10%) = 100%; N1 (90%) + N2 (10%) + N3 (0%) = 100%; N1 (95%) + N2 (5%) + N3 (0%) = 100%; N1 (85%) + N2 (15%) + N3 (0%) = 100%; N1 (90%) + N2 (0%) + N3 (10%) = 100%; N1 (95%) + N2 (0%) + N3 (5%) = 100%; or N1 (98%) + N2 (0%) + N3 (2%) = 100%. Those skilled in the art will understand that there are many possible combinations of N1, N2 and N3, which are not enumerated here one by one.
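The weighted combination above can be sketched as follows. The default weights use one of the example splits (90% / 8% / 2%); the function and parameter names are illustrative, not from the patent.

```python
def total_score(t_match, t_offset=0.0, t_random=0.0,
                n1=0.90, n2=0.08, n3=0.02):
    """T_total = N1*T_match + N2*T_offset + N3*T_random, with N1+N2+N3 = 100%."""
    assert abs(n1 + n2 + n3 - 1.0) < 1e-9, "weights must sum to 100%"
    return n1 * t_match + n2 * t_offset + n3 * t_random

# Matching-only scoring (N1 = 100%, N2 = N3 = 0):
print(total_score(80.0, n1=1.0, n2=0.0, n3=0.0))  # 80.0

# All three components with the 90/8/2 split:
print(round(total_score(80.0, t_offset=90.0, t_random=50.0), 1))  # 80.2
```

Setting N2 or N3 to zero reproduces the reduced forms of the formula given in the text, so one function covers all three variants.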
In one embodiment of the present disclosure, displaying the matching result corresponding to the total score on the display unit specifically includes: displaying, on the display unit, a score and/or an animation corresponding to the total score. The animation may be a text animation such as "perfect", "good", "great" or "miss", or a special effect displayed on the display unit such as falling hearts or falling stars. Those skilled in the art will understand that the displayed results are not limited to these examples.
As shown in FIG. 10, the computer-readable storage medium provided by an embodiment of the third aspect of the present disclosure stores a computer program which, when executed by a processor, implements the steps of any of the above methods for determining an action matching result in human-computer interaction. The computer-readable storage medium may include, but is not limited to, any type of disk, including flash memory, hard disks, multimedia cards, card-type memory (e.g., SD or DX memory), static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, floppy disks, optical discs, DVDs, CD-ROMs, microdrives, magneto-optical disks, ROM, RAM, EPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any other type of medium or device suitable for storing instructions and/or data. In one embodiment of the present disclosure, the computer-readable storage medium 900 stores non-transitory computer-readable instructions 901. When the non-transitory computer-readable instructions 901 are executed by a processor, the human-computer interaction method based on dynamic human posture according to the embodiments of the present disclosure described above is performed.
The human-computer interaction device provided by an embodiment of the fourth aspect of the present disclosure includes a memory, a processor, and a program stored in the memory and executable on the processor. When the processor executes the program, the steps of any of the above methods for determining an action matching result in human-computer interaction are implemented.
In one embodiment of the present disclosure, the memory is used to store non-transitory computer-readable instructions. Specifically, the memory may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory. In one embodiment of the present disclosure, the processor may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the human-computer interaction device to perform the desired functions. In one embodiment of the present disclosure, the processor is configured to execute the computer-readable instructions stored in the memory, so that the human-computer interaction device performs the interaction method described above.
In one embodiment of the present disclosure, as shown in FIG. 11, the human-computer interaction device 80 includes a memory 801 and a processor 802. The components of the human-computer interaction device 80 are interconnected by a bus system and/or another form of connection mechanism (not shown).
The memory 801 is used to store non-transitory computer-readable instructions. Specifically, the memory 801 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, or flash memory.
The processor 802 may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the human-computer interaction device 80 to perform the desired functions. In one embodiment of the present disclosure, the processor 802 is configured to execute the computer-readable instructions stored in the memory 801, so that the human-computer interaction device 80 performs the human-computer interaction method based on dynamic human posture described above. The human-computer interaction device corresponds to the embodiments of the human-computer interaction method based on dynamic human posture described above, and its repeated description is omitted here.
In one embodiment of the present disclosure, the human-computer interaction device is a mobile device. The camera of the mobile device captures images of the user, and the song and instruction images corresponding to an instruction are downloaded through the mobile device. After the song and instruction images have been downloaded, a recognition frame (which may be human-shaped) appears on the display unit of the mobile device, and by adjusting the distance between the user and the mobile device, the user's image is placed within the recognition frame. The mobile device then starts playing music while the display unit shows the instruction images (which may be rendered as graphics such as bright spots, stars, or rings). The user begins dancing so that his or her body movements match the instruction images. According to how well the user's movements match the instruction images, a score and/or an animation is displayed on the display unit (the animation may be a text animation such as "perfect", "good", "great" or "miss", or a special effect such as falling hearts or falling stars). When the music finishes, the display unit of the mobile device shows the score and a grade, and the user can download the dance video, share it, or enter a leaderboard. The mobile device may be a mobile phone, a tablet computer, or the like.
In the present disclosure, the term "multiple" means two or more, unless expressly defined otherwise. Terms such as "mount", "connect", "couple" and "fix" are to be understood broadly; for example, "connect" may mean a fixed connection, a detachable connection, or an integral connection, and components may be connected directly or indirectly through an intermediary. Those of ordinary skill in the art can understand the specific meanings of these terms in the present disclosure according to the specific context.
In the description of this specification, the terms "one embodiment", "some embodiments", "specific embodiments" and the like mean that specific features, structures, materials or characteristics described in connection with the embodiment or example are included in at least one embodiment or example of the present disclosure. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above are merely preferred embodiments of the present disclosure and are not intended to limit it; those skilled in the art may make various modifications and changes to the present disclosure. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall fall within its scope of protection.
Claims (13)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2018102744940 | 2018-03-29 | ||
| CN201810274494 | 2018-03-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN108563331A true CN108563331A (en) | 2018-09-21 |
Family
ID=63534298
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810301244.1A Pending CN108563331A (en) | 2018-03-29 | 2018-04-04 | Act matching result determining device, method, readable storage medium storing program for executing and interactive device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN108563331A (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109985380A (en) * | 2019-04-09 | 2019-07-09 | 北京马尔马拉科技有限公司 | Internet gaming man-machine interaction method and system |
| CN111951626A (en) * | 2019-05-16 | 2020-11-17 | 上海流利说信息技术有限公司 | Language learning apparatus, method, medium, and computing device |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070239993A1 (en) * | 2006-03-17 | 2007-10-11 | The Trustees Of The University Of Pennsylvania | System and method for comparing similarity of computer programs |
| CN101306249A (en) * | 2008-05-30 | 2008-11-19 | 北京中星微电子有限公司 | Motion analysis device and method |
| CN104598867A (en) * | 2013-10-30 | 2015-05-06 | 中国艺术科技研究所 | Automatic evaluation method of human body action and dance scoring system |
| CN106228143A (en) * | 2016-08-02 | 2016-12-14 | 王国兴 | A kind of method that instructional video is marked with camera video motion contrast |
| CN107341612A (en) * | 2017-07-07 | 2017-11-10 | 上海理工大学 | A kind of action evaluation method of the rehabilitation training based on taijiquan |
| CN107349594A (en) * | 2017-08-31 | 2017-11-17 | 华中师范大学 | A kind of action evaluation method of virtual Dance System |
-
2018
- 2018-04-04 CN CN201810301244.1A patent/CN108563331A/en active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070239993A1 (en) * | 2006-03-17 | 2007-10-11 | The Trustees Of The University Of Pennsylvania | System and method for comparing similarity of computer programs |
| CN101306249A (en) * | 2008-05-30 | 2008-11-19 | 北京中星微电子有限公司 | Motion analysis device and method |
| CN104598867A (en) * | 2013-10-30 | 2015-05-06 | 中国艺术科技研究所 | Automatic evaluation method of human body action and dance scoring system |
| CN106228143A (en) * | 2016-08-02 | 2016-12-14 | 王国兴 | A kind of method that instructional video is marked with camera video motion contrast |
| CN107341612A (en) * | 2017-07-07 | 2017-11-10 | 上海理工大学 | A kind of action evaluation method of the rehabilitation training based on taijiquan |
| CN107349594A (en) * | 2017-08-31 | 2017-11-17 | 华中师范大学 | A kind of action evaluation method of virtual Dance System |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109985380A (en) * | 2019-04-09 | 2019-07-09 | 北京马尔马拉科技有限公司 | Internet gaming man-machine interaction method and system |
| CN111951626A (en) * | 2019-05-16 | 2020-11-17 | 上海流利说信息技术有限公司 | Language learning apparatus, method, medium, and computing device |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US20220083216A1 (en) | Managing real-time handwriting recognition | |
| JP2025124752A (en) | Real-time handwriting recognition management | |
| US8549418B2 (en) | Projected display to enhance computer device use | |
| CN103197866B (en) | Information processing device, information processing method and program | |
| WO2019184633A1 (en) | Human-machine interaction system, method, computer readable storage medium, and interaction device | |
| CN108536293B (en) | Man-machine interaction system, man-machine interaction method, computer-readable storage medium and interaction device | |
| CN104731316B (en) | System and method for presenting information on a device based on eye tracking | |
| CN108829751B (en) | Method, device, electronic device and storage medium for generating and displaying lyrics | |
| WO2022116751A1 (en) | Interaction method and apparatus, and terminal, server and storage medium | |
| WO2017096509A1 (en) | Displaying and processing method, and related apparatuses | |
| US20170017838A1 (en) | Methods and systems for indexing multimedia content | |
| JP2021531589A (en) | Motion recognition method, device and electronic device for target | |
| CN107193571A (en) | Interface push method, mobile terminal and storage medium | |
| CN112827171A (en) | Interactive method, apparatus, electronic device and storage medium | |
| US10248306B1 (en) | Systems and methods for end-users to link objects from images with digital content | |
| US20250039537A1 (en) | Screenshot processing method, electronic device, and computer readable medium | |
| CN113920226B (en) | User interaction method, device, storage medium and electronic device | |
| CN106156794A (en) | Character recognition method based on writing style identification and device | |
| Nuzhdin et al. | HaGRIDv2: 1M images for static and dynamic hand gesture recognition | |
| CN108563331A (en) | Act matching result determining device, method, readable storage medium storing program for executing and interactive device | |
| CN108133209A (en) | Target area searching method and device in text recognition | |
| JP2016181042A (en) | Search device, method and program | |
| CN107025295A (en) | A kind of photo film making method and mobile terminal | |
| US20250008016A1 (en) | Multimedia messaging apparatuses and methods for sending multimedia messages | |
| CN111275683B (en) | Image quality grading processing method, system, device and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180921 |