
CN109164915B - Gesture recognition method, device, system and equipment - Google Patents

Gesture recognition method, device, system and equipment

Info

Publication number
CN109164915B
CN109164915B · Application CN201810941071.XA
Authority
CN
China
Prior art keywords
information
moment
input
preset
scattering centers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810941071.XA
Other languages
Chinese (zh)
Other versions
CN109164915A (en)
Inventor
徐强
方有纲
刘耀中
刘耿烨
李跃星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Time Change Communication Technology Co Ltd
Original Assignee
Hunan Time Change Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Time Change Communication Technology Co Ltd filed Critical Hunan Time Change Communication Technology Co Ltd
Priority to CN201810941071.XA priority Critical patent/CN109164915B/en
Publication of CN109164915A publication Critical patent/CN109164915A/en
Application granted granted Critical
Publication of CN109164915B publication Critical patent/CN109164915B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application discloses a gesture recognition method, device, system and equipment. The method includes: 101. constructing a hand virtual model corresponding to the human hand at the current moment according to coordinate information, received in real time, corresponding to each of a plurality of scattering centers of the human hand sent by a gesture radar; 102. executing step 101 in a loop until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval; 103. determining the coordinate information of a preset scattering center on the hand virtual model at different moments, and connecting the coordinate information of the preset scattering center at different moments in chronological order to obtain a coordinate change trajectory of the preset scattering center, the preset scattering center being one of the plurality of scattering centers; 104. obtaining information to be input according to the coordinate change trajectory, and displaying the content corresponding to the information to be input according to the information to be input.

Description

A gesture recognition method, device, system and equipment

Technical Field

The present application relates to the technical field of gesture recognition, and in particular to a gesture recognition method, device, system and equipment.

Background

With the continuous development of mobile terminal devices and the emergence of virtual reality devices, human-computer interaction has become increasingly important. As an important branch of this field, gesture recognition has the advantages of matching people's natural habits and offering a high degree of freedom.

Most traditional gesture recognition technologies are implemented with cameras, and each of them has drawbacks. Optical-camera-based interaction needs to acquire a large amount of image data at different depths of field, and powerful data processing capability is required to extract the needed information, which heavily occupies hardware resources; in addition, the optical camera must not be occluded, and the user's privacy is at risk of leakage. Infrared cameras share the drawbacks of optical cameras while offering lower accuracy, and they are susceptible to interference from heat sources and strong light sources. Radar-based human-computer interaction has therefore begun to emerge, but current radar-based interaction only supports the control of simple gestures, suffers from poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

Summary of the Invention

Embodiments of the present application provide a gesture recognition method, device, system and equipment for gesture recognition, which solve the technical problem that current radar-based human-computer interaction devices only support the interactive control of simple gestures, have poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

In view of this, a first aspect of the present application provides a gesture recognition method, including:

101. Construct a hand virtual model corresponding to the human hand at the current moment according to the coordinate information, received in real time, corresponding to each of a plurality of scattering centers of the human hand sent by a gesture radar;

102. Execute step 101 in a loop until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval;

103. Determine the coordinate information of a preset scattering center on the hand virtual model at different moments, and connect the coordinate information of the preset scattering center at different moments in chronological order to obtain a coordinate change trajectory of the preset scattering center, the preset scattering center being one of the plurality of scattering centers;

104. Obtain information to be input according to the coordinate change trajectory, and display the content corresponding to the information to be input according to the information to be input.

Preferably, the method further includes:

105. While receiving the coordinate information corresponding to each of the plurality of scattering centers sent by the gesture radar, receive motion information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information including: speed information, acceleration information and direction information;

Step 102 is specifically:

102. Execute steps 101 and 105 in a loop until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval;

If the hand virtual model at a first moment has not been constructed, determining the coordinate information of the preset scattering center on the hand virtual model at the first moment specifically includes:

determining the motion information of the preset scattering center on the hand virtual model at a second moment, and determining the coordinate information of the preset scattering center on the hand virtual model at the first moment according to the motion information at the second moment, where the second moment is the moment immediately before the first moment, and the first moment is a moment within the time period from the first execution of step 101 to the last execution of step 101.

Preferably, the information to be input is an image to be input;

Step 104 specifically includes:

104. Obtain the image to be input according to the coordinate change trajectory, and display the image to be input.

Preferably, the information to be input is text to be input;

Step 104 specifically includes:

104. Obtain the text to be input according to the coordinate change trajectory, compare the text to be input with a preset character library to obtain the character corresponding to the text to be input, and display the character.

A second aspect of the present application provides a gesture recognition device, including:

a model building unit, configured to construct a hand virtual model corresponding to the human hand at the current moment according to the coordinate information, received in real time, corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar;

a first loop unit, configured to repeatedly trigger the model building unit until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval;

a trajectory determining unit, configured to determine the coordinate information of the preset scattering center on the hand virtual model at different moments, and connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the plurality of scattering centers;

a display unit, configured to obtain the information to be input according to the coordinate change trajectory, and display the content corresponding to the information to be input according to the information to be input.

Preferably, the device further includes:

a motion information acquiring unit, configured to receive, while receiving the coordinate information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information including: speed information, acceleration information and direction information;

the first loop unit is specifically configured to repeatedly trigger the motion information acquiring unit and the model building unit until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval;

if the hand virtual model at the first moment has not been constructed, determining the coordinate information of the preset scattering center on the hand virtual model at the first moment specifically includes:

determining the motion information of the preset scattering center on the hand virtual model at the second moment, and determining the coordinate information of the preset scattering center on the hand virtual model at the first moment according to the motion information at the second moment, where the second moment is the moment immediately before the first moment, and the first moment is a moment within the time period from the first triggering of the model building unit to the last triggering of the model building unit.

Preferably, the information to be input is an image to be input;

the display unit is specifically configured to obtain the image to be input according to the coordinate change trajectory and display the image to be input.

Preferably, the information to be input is text to be input;

the display unit is specifically configured to obtain the text to be input according to the coordinate change trajectory, compare the text to be input with a preset character library to obtain the character corresponding to the text to be input, and display the character.

A third aspect of the present application provides a gesture recognition system, including a gesture radar and the above gesture recognition device;

the gesture radar is communicatively connected with the gesture recognition device;

the gesture radar is configured to transmit radar signals to the human hand in real time;

the gesture radar is further configured to receive the echo signals reflected by the plurality of scattering centers of the human hand, calculate the coordinate information corresponding to each scattering center from each echo signal, and send the coordinate information corresponding to each of the plurality of scattering centers to the gesture recognition device.

A fourth aspect of the present application provides a gesture recognition device, the device including a processor and a memory;

the memory is configured to store program code and transmit the program code to the processor;

the processor is configured to execute the above gesture recognition method according to the instructions in the program code.

As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:

In the embodiments of the present application, a gesture recognition method is provided. Because the gesture radar sends the coordinate information corresponding to each of the plurality of scattering centers of the human hand in real time, a hand virtual model can be constructed according to the received coordinate information corresponding to each of the plurality of scattering centers, until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval, which means the user's gesture control operation has ended. Throughout the user's operation, each moment corresponds to one hand virtual model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the hand virtual model at different moments, the information to be input can be determined according to this trajectory, and finally the content corresponding to the information to be input is displayed. During the whole recognition process, whatever the user's hand inputs, the input content can be accurately obtained from the coordinate change trajectory of the preset scattering center of the hand virtual model, which solves the technical problem that current radar-based human-computer interaction only supports the interactive control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a first embodiment of a gesture recognition method in an embodiment of the present application;

FIG. 2 is a schematic flowchart of a second embodiment of a gesture recognition method in an embodiment of the present application;

FIG. 3 is a schematic structural diagram of an embodiment of a gesture recognition device in an embodiment of the present application;

FIG. 4 is a schematic structural diagram of an embodiment of a gesture recognition system in an embodiment of the present application.

Detailed Description of the Embodiments

The embodiments of the present application provide a gesture recognition method, device, system and equipment for gesture recognition, which solve the technical problem that current radar-based human-computer interaction devices only support the interactive control of simple gestures, have poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

Referring to FIG. 1, a schematic flowchart of a first embodiment of a gesture recognition method in an embodiment of the present application includes:

Step 101: Construct a hand virtual model corresponding to the human hand at the current moment according to the coordinate information, received in real time, corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar.

It should be noted that different parts of the human hand (such as the fingers, the back of the hand, etc.) return different echo signals to the gesture radar, that is, one echo signal corresponds to one hand part. The hand virtual model corresponding to the human hand at the current moment is constructed according to the coordinate information, received in real time, corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar.
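Purely as an illustration (the patent does not prescribe any particular data structure), the per-moment hand virtual model can be thought of as a time-stamped collection of scattering-center coordinates keyed by hand part; the field names and types below are assumptions made for this sketch.

```python
# Minimal sketch, assuming one radar frame arrives as a mapping from a hand part
# (scattering center) to its (x, y, z) coordinates; the "hand virtual model" for
# that moment is simply the collected points stamped with the reception time.
from dataclasses import dataclass
from typing import Dict, Tuple
import time

Coordinate = Tuple[float, float, float]  # (x, y, z) in the radar's frame of reference


@dataclass
class HandModel:
    timestamp: float                 # moment at which the frame was received
    centers: Dict[str, Coordinate]   # e.g. {"index_fingertip": (x, y, z), ...}


def build_hand_model(frame: Dict[str, Coordinate]) -> HandModel:
    """Build the hand virtual model for the current moment from one radar frame."""
    return HandModel(timestamp=time.time(), centers=dict(frame))
```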

Step 102: Execute step 101 in a loop until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval.

It should be noted that if the coordinate information of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval, the user's gesture control operation is considered to have ended.

It can be understood that the preset time interval can be set as required and is not specifically limited here.
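A minimal sketch of the acquisition loop of steps 101-102 follows. `radar.poll(timeout)` is a hypothetical blocking call used only for illustration, not the API of any real radar driver, and the 0.5 s default is an arbitrary example of the preset time interval.

```python
# Illustrative sketch: keep collecting frames (and building a hand model for each)
# until no frame arrives within the preset time interval.
from typing import Any, List, Optional


def collect_frames(radar: Any, preset_interval_s: float = 0.5) -> List[dict]:
    frames: List[dict] = []
    while True:
        frame: Optional[dict] = radar.poll(timeout=preset_interval_s)
        if frame is None:         # nothing received within the preset time interval:
            break                 # the user's gesture input is considered finished
        frames.append(frame)      # in the full pipeline, build_hand_model(frame) here
    return frames
```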

Step 103: Determine the coordinate information of the preset scattering center on the hand virtual model at different moments, and connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center; the preset scattering center is one of the plurality of scattering centers.

It should be noted that, because the hand virtual model is built in real time, once the coordinate information of the preset scattering center on the hand virtual model at different moments is determined, connecting the coordinate information of the preset scattering center at different moments in chronological order yields the coordinate change trajectory of the preset scattering center, that is, the motion trajectory of the hand virtual model, which is also the coordinate change trajectory input by the user through gestures. It can be understood that the preset scattering center may be the point corresponding to a finger; it can also be understood that a point corresponding to another part may be set as the preset scattering center, which is specifically determined according to the user's input habits and the like.
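Continuing the illustrative sketch started after step 101, step 103 amounts to reading one chosen scattering center out of every per-moment model and ordering its positions by time; `"index_fingertip"` is an assumed example of the preset scattering center, not a name taken from the patent.

```python
# Sketch of step 103, reusing the HandModel from the earlier sketch.
from typing import List, Tuple

Coordinate = Tuple[float, float, float]


def trajectory(models: List["HandModel"],
               preset_center: str = "index_fingertip") -> List[Coordinate]:
    ordered = sorted(models, key=lambda m: m.timestamp)        # chronological order
    return [m.centers[preset_center]                           # skip moments where the
            for m in ordered if preset_center in m.centers]    # center was not observed
```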

Step 104: Obtain the information to be input according to the coordinate change trajectory, and display the content corresponding to the information to be input according to the information to be input.

In this embodiment, because the gesture radar sends the coordinate information corresponding to each of the plurality of scattering centers of the human hand in real time, a hand virtual model can be constructed according to the received coordinate information corresponding to each of the plurality of scattering centers, until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval, which means the user's gesture control operation has ended. Throughout the user's operation, each moment corresponds to one hand virtual model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the hand virtual model at different moments, the information to be input can be determined according to this trajectory, and finally the content corresponding to the information to be input is displayed. During the whole recognition process, whatever the user's hand inputs, the input content can be accurately obtained from the coordinate change trajectory of the preset scattering center of the hand virtual model, which solves the technical problem that current radar-based human-computer interaction only supports the interactive control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

The above is the first embodiment of a gesture recognition method provided by the embodiments of the present application; the following is a second embodiment of a gesture recognition method provided by the embodiments of the present application.

Referring to FIG. 2, a schematic flowchart of a second embodiment of a gesture recognition method in an embodiment of the present application includes:

Step 201: Construct a hand virtual model corresponding to the human hand at the current moment according to the coordinate information, received in real time, corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar.

It should be noted that the content of step 201 is the same as that of step 101 in the first embodiment of the present application. For a detailed description, refer to step 101 of the first embodiment, which will not be repeated here.

Step 202: While receiving the coordinate information corresponding to each of the plurality of scattering centers sent by the gesture radar, receive motion information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information including speed information, acceleration information and direction information.

It should be noted that the reflections from the various parts of the hand are all weak and, because a human is a living organism, the reflection intensity of each part is very unstable, so the target is easily lost and the hand model may fail to be constructed. Therefore, when the target is lost at a certain moment, that is, when the hand virtual model at the first moment has not been constructed, the coordinate information of the preset scattering center at that moment can be determined from the motion information at the previous moment.

Step 203: Execute steps 201 and 202 in a loop until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within a preset time interval.

Step 204: If the hand virtual model at the first moment has not been constructed, determine the motion information of the preset scattering center on the hand virtual model at the second moment, and determine the coordinate information of the preset scattering center on the hand virtual model at the first moment according to the motion information at the second moment; the second moment is the moment immediately before the first moment, and the first moment is a moment within the time period from the first execution of step 201 to the last execution of step 201.

It should be noted that if the hand virtual model at the first moment has not been constructed, the position of the preset scattering center will not change abruptly and will remain near its position at the previous moment; the direction of motion will, with high probability, continue along the previous direction, and the speed changes gradually or stays the same. Therefore, given the speed information, acceleration information and direction information at the moment immediately before the first moment, the coordinate information of the preset scattering center at the first moment can be determined. The preset scattering center is one of the plurality of scattering centers.

It can be understood that this embodiment only describes determining the coordinate information at the first moment from the motion information at the moment before the first moment; the coordinate information at the first moment may also be determined from the motion information at the moment after the first moment. For the specific implementation, refer to the foregoing process, which will not be repeated here.
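One way to realize this prediction, assuming the motion information can be expressed as velocity and acceleration vectors and that the acceleration is roughly constant over the short gap, is ordinary dead reckoning. This is a sketch under those assumptions, not the patent's prescribed formula.

```python
# Sketch of step 204 under an assumed constant-acceleration model: when the hand
# virtual model at the first moment is missing, predict the preset scattering
# center's position from its position, velocity and acceleration at the second
# (previous) moment.
from typing import Tuple

Vec3 = Tuple[float, float, float]


def predict_position(prev_pos: Vec3, prev_vel: Vec3, prev_acc: Vec3, dt: float) -> Vec3:
    """p(t1) ~ p(t2) + v(t2)*dt + 0.5*a(t2)*dt^2, with dt = t1 - t2."""
    x, y, z = (p + v * dt + 0.5 * a * dt * dt
               for p, v, a in zip(prev_pos, prev_vel, prev_acc))
    return (x, y, z)
```

The same formula applies symmetrically when predicting from the following moment instead, using a negative `dt`.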

Step 205: Determine the coordinate information of the preset scattering center on the hand virtual model at different moments, and connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center.

It should be noted that the time order here is from earliest to latest, but it can be understood that the time order may also be from latest to earliest.

Step 206: Obtain the information to be input according to the coordinate change trajectory, and display the content corresponding to the information to be input according to the information to be input.

It should be noted that when the information to be input is an image to be input, in order to diversify the image, the image to be input can be obtained directly from the coordinate change trajectory and displayed. It can be understood that, because the hand virtual model is 3D, the corresponding image to be input may be 2D or 3D, which is specifically determined according to the actual coordinate change trajectory. When the information to be input is text to be input, the text to be input is obtained according to the coordinate change trajectory, the text to be input is compared with a preset character library to obtain the character corresponding to the text to be input, and the character is displayed. It can be understood that the preset character library includes, but is not limited to, Chinese characters, English characters, numeric characters, and the like.
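The patent does not specify how the drawn trajectory is compared with the preset character library. Purely as an assumed example, the comparison could be a simple template match on a 2D projection of the trajectory, as sketched below; the resampling count, normalization and distance measure are all illustrative choices.

```python
# Assumed template-matching sketch: project the 3D trajectory onto a 2D plane,
# resample and normalize it, then pick the library character whose template is
# closest in point-wise squared distance.
from typing import Dict, List, Tuple

Point2D = Tuple[float, float]
Point3D = Tuple[float, float, float]


def _normalize(points: List[Point2D], n: int = 32) -> List[Point2D]:
    """Resample to at most n points and scale into a unit box (rough normalization)."""
    step = max(1, len(points) // n)
    sampled = points[::step][:n]
    xs, ys = zip(*sampled)
    w = (max(xs) - min(xs)) or 1.0
    h = (max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in sampled]


def match_character(stroke: List[Point3D], library: Dict[str, List[Point2D]]) -> str:
    """Return the library character whose template is closest to the drawn stroke."""
    drawn = _normalize([(x, y) for x, y, _z in stroke])   # drop depth for a 2D glyph

    def distance(template: List[Point2D]) -> float:
        t = _normalize(template)
        k = min(len(drawn), len(t))
        return sum((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
                   for a, b in zip(drawn[:k], t[:k]))

    return min(library, key=lambda ch: distance(library[ch]))
```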

It can be understood that, once the speed information at each moment has been determined, when inputting text or graphics for example, a higher speed can correspond to a thinner line and a lower speed to a thicker line, so that the input content is restored as faithfully as possible. A specific implementation can be to set a preset diameter of the coordinate change trajectory at a preset speed, and then set the diameter of the coordinate change trajectory at another speed according to the ratio of that speed to the preset speed.
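A minimal sketch of that rule follows, with arbitrary example values for the preset speed and preset diameter, and an assumed inverse proportionality so that faster motion gives a thinner stroke, as described above.

```python
# Illustrative speed-to-stroke-width rule: the preset speed maps to the preset
# diameter, and other speeds scale the diameter by the inverse of their ratio to
# the preset speed. Constants are example values, not taken from the patent.
PRESET_SPEED_M_S = 0.5      # speed at which the stroke has its preset diameter
PRESET_DIAMETER_PX = 4.0    # preset stroke diameter in display pixels


def stroke_diameter(speed_m_s: float) -> float:
    ratio = max(speed_m_s, 1e-6) / PRESET_SPEED_M_S
    return PRESET_DIAMETER_PX / ratio   # faster => thinner, slower => thicker
```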

In this embodiment, because the gesture radar sends the coordinate information corresponding to each of the plurality of scattering centers of the human hand in real time, a hand virtual model can be constructed according to the received coordinate information corresponding to each of the plurality of scattering centers, until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval, which means the user's gesture control operation has ended. Throughout the user's operation, each moment corresponds to one hand virtual model. The coordinate change trajectory of the preset scattering center is then determined from the coordinate information of the preset scattering center on the hand virtual model at different moments, the information to be input can be determined according to this trajectory, and finally the content corresponding to the information to be input is displayed. During the whole recognition process, whatever the user's hand inputs, the input content can be accurately obtained from the coordinate change trajectory of the preset scattering center of the hand virtual model, which solves the technical problem that current radar-based human-computer interaction only supports the interactive control of simple gestures, has poor stability and a low gesture recognition success rate, and cannot handle complex interaction, resulting in a poor user experience.

The above is the second embodiment of a gesture recognition method provided by the embodiments of the present application; the following is an embodiment of a gesture recognition device provided by the embodiments of the present application.

Referring to FIG. 3, a schematic structural diagram of an embodiment of a gesture recognition device in an embodiment of the present application includes:

a model building unit 301, configured to construct a hand virtual model corresponding to the human hand at the current moment according to the coordinate information, received in real time, corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar;

a first loop unit 302, configured to repeatedly trigger the model building unit 301 until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval;

a trajectory determining unit 303, configured to determine the coordinate information of the preset scattering center on the hand virtual model at different moments, and connect the coordinate information of the preset scattering center at different moments in chronological order to obtain the coordinate change trajectory of the preset scattering center, the preset scattering center being one of the plurality of scattering centers;

a display unit 304, configured to obtain the information to be input according to the coordinate change trajectory, and display the content corresponding to the information to be input according to the information to be input.

Further, the device also includes:

a motion information acquiring unit 305, configured to receive, while receiving the coordinate information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information corresponding to each of the plurality of scattering centers sent by the gesture radar, the motion information including: speed information, acceleration information and direction information;

the first loop unit 302 is specifically configured to repeatedly trigger the motion information acquiring unit 305 and the model building unit 301 until the coordinate information corresponding to each of the plurality of scattering centers of the human hand sent by the gesture radar is not received within the preset time interval;

if the hand virtual model at the first moment has not been constructed, determining the coordinate information of the preset scattering center on the hand virtual model at the first moment specifically includes:

determining the motion information of the preset scattering center on the hand virtual model at the second moment, and determining the coordinate information of the preset scattering center on the hand virtual model at the first moment according to the motion information at the second moment, where the second moment is the moment immediately before the first moment, and the first moment is a moment within the time period from the first triggering of the model building unit to the last triggering of the model building unit.

Further, the information to be input is an image to be input;

the display unit 304 is specifically configured to obtain the image to be input according to the coordinate change trajectory and display the image to be input.

Further, the information to be input is text to be input;

the display unit 304 is specifically configured to obtain the text to be input according to the coordinate change trajectory, compare the text to be input with the preset character library to obtain the character corresponding to the text to be input, and display the character. It can be understood that the preset character library includes, but is not limited to, Chinese characters, English characters, numeric characters, and the like.

The above is an embodiment of a gesture recognition device provided by the embodiments of the present application; the following is an embodiment of a gesture recognition system provided by the embodiments of the present application.

Referring to FIG. 4, a schematic structural diagram of a gesture recognition system in an embodiment of the present application includes: a gesture radar 401 and the gesture recognition device 402 of the third embodiment above;

the gesture radar 401 is communicatively connected with the gesture recognition device 402;

the gesture radar 401 is configured to transmit radar signals to the human hand in real time;

the gesture radar 401 is further configured to receive the echo signals reflected by the plurality of scattering centers of the human hand, calculate the coordinate information corresponding to each scattering center from each echo signal, and send the coordinate information corresponding to each of the plurality of scattering centers to the gesture recognition device 402.

It should be noted that the gesture radar 401 uses a linear frequency-modulated continuous-wave (FMCW) radar signal and operates in the V band (55 GHz-65 GHz). The basic principle of FMCW radar is to mix the transmitted signal with the echo signal to obtain a beat-frequency signal; processing and analyzing this beat-frequency signal yields the relative distance, relative angle and relative speed, and the coordinate information can be obtained from the relative distance and relative angle. It can be understood that the gesture radar 401 is a module that can be installed directly inside the gesture recognition device, such as a television or a mobile phone; it can also be made into an external module connected to the gesture recognition device via USB. At the same time, it can be understood that the user does not need to wear anything to operate; bare-handed operation is sufficient, as long as the hand is within the detection range of the gesture radar 401. It can be understood that, in order to obtain the angle information of the target, the gesture radar can be provided with multiple receiving channels.
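For reference, the standard textbook FMCW relations behind this description are sketched below; the chirp bandwidth, chirp duration and carrier frequency are example values only and are not taken from the patent.

```python
# Standard FMCW relations: the beat frequency from mixing the transmitted chirp with
# the echo is proportional to range, and the Doppler shift gives radial velocity.
C = 3.0e8               # speed of light, m/s

BANDWIDTH = 4.0e9       # Hz, example chirp sweep bandwidth within the 55-65 GHz band
CHIRP_TIME = 100e-6     # s, example chirp duration
CARRIER = 60.0e9        # Hz, example carrier frequency in the V band


def beat_to_range(beat_freq_hz: float) -> float:
    """R = c * f_b * T_chirp / (2 * B)."""
    return C * beat_freq_hz * CHIRP_TIME / (2.0 * BANDWIDTH)


def doppler_to_velocity(doppler_hz: float) -> float:
    """v = c * f_d / (2 * f_carrier), radial velocity toward or away from the radar."""
    return C * doppler_hz / (2.0 * CARRIER)
```

Angle estimation across the multiple receiving channels mentioned above would typically come from the phase difference between channels; that step is omitted from this sketch.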

An embodiment of the present application also provides a gesture recognition device, which includes a processor and a memory: the memory is configured to store program code and transmit the program code to the processor, and the processor is configured to execute the gesture recognition method of each of the foregoing embodiments according to the instructions in the program code, thereby performing various functional applications and data processing.

Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working process of the system, device and units described above, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.

The terms "first", "second", "third", "fourth", and the like (if any) in the description of the present application and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present application described herein can, for example, be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.

It should be understood that, in the present application, "at least one (item)" means one or more, and "a plurality" means two or more. "And/or" is used to describe the relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of the following items" or similar expressions refer to any combination of these items, including any combination of a single item or a plurality of items. For example, at least one of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b and c may each be single or multiple.

In the several embodiments provided in the present application, it should be understood that the disclosed system, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in other forms.

The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.

If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A gesture recognition method, comprising:
101. according to the coordinate information which is sent by the gesture radar and corresponds to the plurality of scattering centers of the human hand and received in real time, a hand virtual model corresponding to the human hand at the current moment is built;
102. step 101 is executed in a circulating manner until coordinate information which is sent by the gesture radar and corresponds to each of the plurality of scattering centers of the human hand is not received within a preset time interval;
103. determining coordinate information of a preset scattering center on the hand virtual model at different moments, and connecting the coordinate information of the preset scattering center at different moments according to a time sequence to obtain a coordinate change track of the preset scattering center, wherein the preset scattering center is one of the plurality of scattering centers;
104. obtaining information to be input according to the coordinate change track, and displaying the content corresponding to the information to be input according to the information to be input;
the method further comprises the following step:
105. receiving, while receiving the coordinate information, sent by the gesture radar, corresponding to each of the plurality of scattering centers, motion information, sent by the gesture radar, corresponding to each of the plurality of scattering centers, where the motion information includes: speed information, acceleration information, and direction information;
the step 102 specifically includes:
102. circularly executing the steps 101 and 105 until coordinate information which is sent by the gesture radar and corresponds to each of the plurality of scattering centers of the human hand is not received within a preset time interval;
if the virtual hand model at the first moment is not constructed, determining coordinate information of a preset scattering center on the virtual hand model at the first moment, specifically comprising:
and determining motion information of the preset scattering center on the virtual hand model at a second moment, and determining coordinate information of the preset scattering center on the virtual hand model at the first moment according to the motion information at the second moment, wherein the second moment is a previous moment of the first moment, and the first moment is a moment in a time period from the moment of executing the step 101 for the first time to the moment of executing the step 101 for the last time.
2. The method according to claim 1, wherein the information to be input is an image to be input;
the step 104 specifically includes:
104. and obtaining the image to be input according to the coordinate change track, and displaying the image to be input.
3. The method according to claim 1, wherein the information to be input is a text to be input;
the step 104 specifically includes:
104. and obtaining the characters to be input according to the coordinate change track, comparing the characters to be input with a preset character library to obtain characters corresponding to the characters to be input, and displaying the characters.
4. A gesture recognition apparatus, comprising:
the model building unit is used for building a hand virtual model corresponding to the human hand at the current moment according to the real-time received coordinate information corresponding to the plurality of scattering centers of the human hand and sent by the gesture radar;
the first circulation unit is used for repeatedly triggering the model construction unit until coordinate information which is sent by the gesture radar and corresponds to each of the plurality of scattering centers of the human hand is not received within a preset time interval;
the trajectory determining unit is used for determining coordinate information of a preset scattering center on the hand virtual model at different moments, and connecting the coordinate information of the preset scattering center at different moments according to a time sequence to obtain a coordinate change trajectory of the preset scattering center, wherein the preset scattering center is one of the plurality of scattering centers;
the display unit is used for obtaining information to be input according to the coordinate change track and displaying the content corresponding to the information to be input according to the information to be input;
the device further comprises:
a motion information obtaining unit, configured to receive, while receiving the coordinate information, sent by the gesture radar, corresponding to each of the multiple scattering centers, motion information, sent by the gesture radar, corresponding to each of the multiple scattering centers, where the motion information includes: speed information, acceleration information, and direction information;
the first circulation unit is specifically used for repeatedly triggering the motion information acquisition unit and the model construction unit until coordinate information which is sent by the gesture radar and corresponds to the plurality of scattering centers of the human hand is not received within a preset time interval; if the virtual hand model at the first moment is not constructed, determining coordinate information of a preset scattering center on the virtual hand model at the first moment, specifically comprising:
and determining the motion information of the preset scattering center on the hand virtual model at a second moment, and determining the coordinate information of the preset scattering center on the hand virtual model at the first moment according to the motion information at the second moment, wherein the second moment is a previous moment of the first moment, and the first moment is a moment in a time period formed by a moment from the moment of triggering the model construction unit for the first time to the moment of triggering the model construction unit for the last time.
5. The apparatus according to claim 4, wherein the information to be input is an image to be input;
the display unit is specifically configured to obtain the image to be input according to the coordinate change trajectory and display the image to be input.
6. The device of claim 4, wherein the information to be input is a text to be input;
the display unit is specifically configured to obtain the characters to be input according to the coordinate change trajectory, compare the characters to be input with a preset character library to obtain characters corresponding to the characters to be input, and display the characters.
7. A gesture recognition system, comprising: a gesture radar and a gesture recognition apparatus according to any one of the preceding claims 4 to 6;
the gesture radar is in communication connection with the gesture recognition device; the gesture radar is used for transmitting a radar signal to the hand of the human body in real time; the gesture radar is further configured to receive echo signals reflected by a plurality of scattering centers of the human hand correspondingly, calculate coordinate information corresponding to each scattering center according to each echo signal, and send the coordinate information corresponding to each scattering center to the gesture recognition device.
8. A gesture recognition device, the device comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to perform the gesture recognition method of any one of claims 1 to 3 according to instructions in the program code.
CN201810941071.XA 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment Active CN109164915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810941071.XA CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810941071.XA CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Publications (2)

Publication Number Publication Date
CN109164915A CN109164915A (en) 2019-01-08
CN109164915B true CN109164915B (en) 2020-03-17

Family

ID=64895863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810941071.XA Active CN109164915B (en) 2018-08-17 2018-08-17 Gesture recognition method, device, system and equipment

Country Status (1)

Country Link
CN (1) CN109164915B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113253832B (en) * 2020-02-13 2023-10-13 Oppo广东移动通信有限公司 Gesture recognition method, gesture recognition device, terminal and computer readable storage medium
CN111562843A (en) * 2020-04-29 2020-08-21 广州美术学院 Positioning method, device, equipment and storage medium for gesture capture
CN111624572B (en) 2020-05-26 2023-07-18 京东方科技集团股份有限公司 Human hand and human gesture recognition method and device
CN113918004B (en) * 2020-07-10 2024-10-11 华为技术有限公司 Gesture recognition method, device, medium and system thereof
CN112885431B (en) * 2021-01-13 2023-09-05 佛山市顺德区美的洗涤电器制造有限公司 Diet recommendation method and device, range hood, processor and storage medium
CN114245542B (en) * 2021-12-17 2024-03-22 深圳市恒佳盛电子有限公司 Radar sensor light and control method thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105786185A (en) * 2016-03-12 2016-07-20 浙江大学 Non-contact type gesture recognition system and method based on continuous-wave micro-Doppler radar
CN107024685A (en) * 2017-04-10 2017-08-08 北京航空航天大学 A kind of gesture identification method based on apart from velocity characteristic

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140324888A1 (en) * 2011-12-09 2014-10-30 Nokia Corporation Method and Apparatus for Identifying a Gesture Based Upon Fusion of Multiple Sensor Signals
US20140046922A1 (en) * 2012-08-08 2014-02-13 Microsoft Corporation Search user interface using outward physical expressions
CN106527670A (en) * 2015-09-09 2017-03-22 广州杰赛科技股份有限公司 Hand gesture interaction device
CN105677019B (en) * 2015-12-29 2018-11-16 大连楼兰科技股份有限公司 A gesture recognition sensor and its working method
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
CN108344995A (en) * 2018-01-25 2018-07-31 宁波隔空智能科技有限公司 A kind of gesture identifying device and gesture identification method based on microwave radar technology


Also Published As

Publication number Publication date
CN109164915A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109164915B (en) Gesture recognition method, device, system and equipment
Liu et al. Real-time arm gesture recognition in smart home scenarios via millimeter wave sensing
US11054912B2 (en) Three-dimensional graphical user interface for informational input in virtual reality environment
US11592909B2 (en) Fine-motion virtual-reality or augmented-reality control using radar
Hayashi et al. RadarNet: Efficient gesture recognition technique utilizing a miniature radar sensor
US11875012B2 (en) Throwable interface for augmented reality and virtual reality environments
Sang et al. Micro hand gesture recognition system using ultrasonic active sensing
EP2702566B1 (en) Inferring spatial object descriptions from spatial gestures
US10286308B2 (en) Controller visualization in virtual and augmented reality environments
Carter et al. Pathsync: Multi-user gestural interaction with touchless rhythmic path mimicry
JP5563709B2 (en) System and method for facilitating interaction with virtual space via a touch-sensitive surface
CN110585731B (en) Method, device, terminal and medium for throwing virtual article in virtual environment
US10386481B1 (en) Angle-of-arrival-based gesture recognition system and method
CN107430443A (en) Gesture identification based on wide field radar
EP4006847A1 (en) Virtual object processing method and apparatus, and storage medium and electronic device
CN103631382A (en) Laser projection virtual keyboard
CN107992251A (en) Technical ability control method, device, electronic equipment and storage medium
CN107066081B (en) An interactive control method and device for a virtual reality system and virtual reality equipment
CN106778565B (en) Pull-up counting method and device
US20200272228A1 (en) Interaction system of three-dimensional space and method for operating same
EP2943852A2 (en) Methods and systems for controlling a virtual interactive surface and interactive display systems
CN115047976A (en) Multi-level AR display method and device based on user interaction and electronic equipment
TWI825004B (en) Input methods, devices, equipment, systems and computer storage media
CN107977071B (en) An operating method and device suitable for a space system
CN112528766B (en) Lip language identification method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Xu Qiang

Inventor after: Fang Yougang

Inventor after: Liu Yaozhong

Inventor after: Liu Gengye

Inventor after: Li Yuexing

Inventor after: Wang Anqi

Inventor before: Xu Qiang

Inventor before: Fang Yougang

Inventor before: Liu Yaozhong

Inventor before: Liu Gengye

Inventor before: Li Yuexing

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A gesture recognition method, device, system and equipment

Effective date of registration: 20210908

Granted publication date: 20200317

Pledgee: China Everbright Bank Co.,Ltd. Xiangtan sub branch

Pledgor: TIME VARYING TRANSMISSION Co.,Ltd.

Registration number: Y2021430000044

PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20200317

Pledgee: China Everbright Bank Co.,Ltd. Xiangtan sub branch

Pledgor: TIME VARYING TRANSMISSION Co.,Ltd.

Registration number: Y2021430000044
