
CN106325509A - Three-dimensional gesture recognition method and system - Google Patents


Info

Publication number
CN106325509A
CN106325509A (application CN201610694390.6A)
Authority
CN
China
Prior art keywords: hand, user, information, dimensional, depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610694390.6A
Other languages
Chinese (zh)
Inventor
伊威
古鉴
方维
杨婷
马宝庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Storm Mirror Technology Co Ltd
Original Assignee
Beijing Storm Mirror Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Storm Mirror Technology Co Ltd
Priority to CN201610694390.6A
Publication of CN106325509A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 Static hand or arm

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Architecture (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a three-dimensional gesture recognition method and system. The method comprises: acquiring first three-dimensional position information of a user's hand, where the first three-dimensional position information is the position information of first position points on the hand; performing prediction on the first three-dimensional position information with a predefined gesture prediction algorithm to obtain second three-dimensional position information and attitude information of the hand, where the second three-dimensional position information is the position information of second position points on the hand; and inputting the second three-dimensional position information and the attitude information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the hand. The technical scheme disclosed by the embodiments of the invention simulates real hand motion in the fields of virtual reality and augmented reality, thereby achieving three-dimensional gesture recognition and interaction.

Description

Three-dimensional gesture recognition method and system
Technical field
The disclosure relates generally to the field of computer technology, specifically to the fields of virtual reality and augmented reality, and in particular to a three-dimensional gesture recognition method and system.
Background technology
Gesture interaction offers users the possibility of natural interaction in many scenarios and is widely applied in gaming, real estate, education, travel, film and television, and other fields. Without wearing any device, users can interact just as they would with their hands in the natural world. This technology is also one of the key human-computer interaction techniques in virtual reality and augmented reality applications, and it is the basis for better interactive experiences and more complex functions. Gesture interaction can greatly enhance the user's sense of reality and immersion when using virtual reality (VR) / augmented reality (AR) devices. Accurate capture, low latency, low power consumption, portability, and low cost are currently the key research and development directions for gesture interaction systems in this field.
From an interaction perspective, gesture is an input mode: an external device captures the hand and produces output that simulates its motion. Human-computer interaction refers to the interaction between people and machines, and it has evolved step by step through the mouse, physical hardware, touch screens, and remote somatosensory operation. Traditional gesture interaction modes are as follows:
1) Simulating gestures with a mouse cursor. The hand holds the mouse and slides it across the display, and the cursor's trajectory approximates the hand's movement. The drawback is that mouse motion is very limited: it is two-dimensional, carries no depth information, and cannot simulate realistic hand operation.
2) Single- or multi-finger gestures on a touchpad. For example, with an external notebook touchpad, single- or multi-finger swipes approximate the hand's movement. Like mouse-cursor interaction, this cannot simulate realistic hand operation.
3) Touch-screen gestures. Mobile terminals (tablets, phones) use touch-screen gestures, mainly the eight gestures of long press, tap, slide, drag, rotate, pinch-zoom, and shake. The advantage is improved portability and simple simulation of gesture actions; the drawback is that the gestures remain very limited and cannot simulate realistic hand operation.
It can be seen that most current gesture interaction modes cannot fully simulate realistic hand operation and cannot be applied in the fields of virtual reality and augmented reality. The prior art provides no effective solution to this problem.
Summary of the invention
In view of the above drawbacks and deficiencies of the prior art, it is desirable to provide a technical scheme that can simulate realistic hand operation in the fields of virtual reality and augmented reality, thereby realizing real gesture interaction between a portable intelligent mobile device and VR/AR equipment.
In a first aspect, the present application provides a three-dimensional gesture recognition method, comprising: acquiring first three-dimensional position information of a user's hand, the first three-dimensional position information being the position information of first position points on the hand; performing prediction on the first three-dimensional position information with a predefined gesture prediction algorithm to obtain second three-dimensional position information and attitude information of the hand, the second three-dimensional position information being the position information of second position points on the hand; and inputting the second three-dimensional position information and the attitude information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the hand.
In a second aspect, the present application provides a three-dimensional gesture recognition system comprising a wearable device and a terminal device. The wearable device includes: a head-mounting structure to be worn on the user's head; a depth sensor for acquiring depth image information of the user's hand and its surroundings; and an interface transmission structure arranged on the depth sensor, through which the depth sensor can send the depth image information to the terminal device, which is detachably mounted on the wearable device. The terminal device includes: an extraction module for extracting the first three-dimensional position information from the depth image information according to a preset hand shape and depth feature values; a calculation module for performing prediction on the first three-dimensional position information with a predefined gesture prediction algorithm to obtain second three-dimensional position information and attitude information of the hand, the second three-dimensional position information being the position information of second position points on the hand; and a processing module for inputting the second three-dimensional position information and the attitude information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the hand.
According to the technical scheme provided by the embodiments of the present application, a depth sensor first acquires the three-dimensional position information of the user's hand, and the terminal device then processes that information with the preset gesture prediction algorithm and the pre-built three-dimensional hand model to obtain the corresponding three-dimensional gesture, finally achieving the simulation of realistic hand operation in the fields of virtual reality and augmented reality.
Accompanying drawing explanation
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is a flowchart of the three-dimensional gesture recognition method according to the present application;
Fig. 2A is a schematic structural diagram of the three-dimensional gesture recognition system according to the present application;
Fig. 2B is a structural block diagram of the terminal device in the three-dimensional gesture recognition system according to the present application;
Fig. 3 is a schematic diagram of three-dimensional gesture interaction according to the present application; and
Fig. 4 is a schematic diagram of the effect of three-dimensional gesture interaction between VR/AR equipment and a terminal device according to the present application.
Detailed description of the invention
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here only explain the related invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the invention.
It should be noted that, where no conflict arises, the embodiments in the present application and the features in the embodiments may be combined with one another. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Compared with the traditional gesture interaction modes above, the prior art also contains modes that can truly simulate hand motion. Mode 1: fix at least one sensor device on the hand to capture its motion. This can truly simulate the hand's actions, but it depends heavily on external sensor devices, is costly and bulky, has poor portability, and, most importantly, requires sensors to be fixed on the user's hand, which makes for a poor user experience. Mode 2: use a binocular camera or depth camera to obtain the three-dimensional information of the hand and process the three-dimensional data to simulate real hand motion. Although this adds no extra sensors on the hand, it must be completed with a PC, because the simulation algorithm is too complex and too demanding for a processing chip; it therefore depends heavily on PC hardware and cannot be integrated on intelligent mobile devices that emphasize portability.
It can be seen that, although both modes can simulate the motion of the user's hand, their respective defects prevent them from being applied in the increasingly mature fields of virtual reality and augmented reality, and thus from providing users with a three-dimensional gesture interaction scheme with a better user experience.
The technical scheme provided by the embodiments of the present application starts from the fields of virtual reality and augmented reality and proposes a three-dimensional gesture interaction scheme between a head-mounted device such as a VR/AR headset and an intelligent mobile device. The whole interaction process greatly reduces the hardware requirements: adding a processing chip to the head-mounted device, or simply using the processing chip that the intelligent mobile device already carries, suffices to complete the whole three-dimensional gesture interaction.
Referring to Fig. 1, a flowchart of the three-dimensional gesture recognition method according to the present application, the flow includes the following steps (steps S102 to S106):
Step S102: acquire first three-dimensional position information of the user's hand, the first three-dimensional position information being the position information of first position points on the hand;
Step S104: perform prediction on the first three-dimensional position information with a predefined gesture prediction algorithm to obtain second three-dimensional position information and attitude information of the hand, the second three-dimensional position information being the position information of second position points on the hand; and
Step S106: input the second three-dimensional position information and the attitude information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the hand.
Through the above steps, a three-dimensional gesture interaction scheme between a head-mounted device such as a VR/AR headset and an intelligent mobile device can be realized, and the hardware requirements of the whole interaction process are greatly reduced.
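The three steps above can be sketched end to end as follows. This is a minimal illustration with hypothetical names, where a simple centroid stands in for the trained gesture prediction model; the patent does not specify an implementation:

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    joint_positions: list  # second 3-D position information: (x, y, z) per joint
    bone_angles: list      # attitude information: angles between bones

def extract_contour_points(depth_image, hand_range=(300, 800)):
    """Step S102 (sketch): keep pixels whose depth falls inside a preset
    hand range and return their (x, y, depth) triples as contour points."""
    points = []
    for y, row in enumerate(depth_image):
        for x, d in enumerate(row):
            if hand_range[0] <= d <= hand_range[1]:
                points.append((x, y, d))
    return points

def predict_pose(contour_points):
    """Step S104 (placeholder): the trained gesture prediction model would
    map contour points to joint positions and bone angles; the centroid
    stands in for that model here."""
    n = len(contour_points)
    cx = sum(p[0] for p in contour_points) / n
    cy = sum(p[1] for p in contour_points) / n
    cz = sum(p[2] for p in contour_points) / n
    return HandPose(joint_positions=[(cx, cy, cz)], bone_angles=[0.0])

def recognize_gesture(depth_image):
    """Step S106 (sketch): the pose would drive a pre-built 3-D hand model;
    here it is simply returned."""
    return predict_pose(extract_contour_points(depth_image))

# A tiny 3x3 depth image: two pixels fall inside the hand range.
demo = [[1000, 500, 1000],
        [1000, 600, 1000],
        [1000, 1000, 1000]]
pose = recognize_gesture(demo)
```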
In step S102, the first three-dimensional position information of the user's hand can be obtained as follows: first, a depth sensor acquires depth image information of the user's hand and its surroundings; then, according to a preset hand shape and depth feature values, the first three-dimensional position information is extracted from the depth image information.
In a preferred implementation, the chip or processing module on the depth sensor is only responsible for collecting the depth image information, which can then be sent to a terminal device such as a smartphone. The powerful processing chip on the terminal device (e.g. CPU or GPU) is then responsible for extracting the first three-dimensional position information from the depth image information according to the preset hand shape and depth feature values.
In the embodiments of the present application, the first position points are the contour points of the user's hand, the second position points are the joint points of the hand, and the attitude information is the angles between the bones of the hand.
That is, the depth sensor first obtains a depth image of its capture field (which naturally contains the user's hand). In practice, many hand samples can be collected in advance to obtain the shape of the user's hand; broadly, the hand comprises a palm and five fingers, and each part of the hand (e.g. the palm edge, the five fingertips) corresponds to a different depth feature value, so different depth feature values can distinguish the spatial positions of the different parts. Accordingly, using the preset hand shape and depth feature values, the hand can be segmented out of the depth image to obtain its general contour, and the position information of the preset points on the contour (i.e. the first position points), namely the first three-dimensional position information, can then be determined.
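The paragraph above says each hand part corresponds to a distinct depth feature value. One common concrete choice in depth-based part classification (an assumption here, since the patent does not name a specific feature) is the depth-invariant depth-difference feature, which can be sketched as:

```python
def depth_feature(depth, px, py, u, v, background=10000.0):
    """Depth-difference feature f = d(p + u/d(p)) - d(p + v/d(p)).
    Offsets are scaled by 1/d(p) so the feature is depth-invariant:
    the same hand part yields a similar value near or far from the
    sensor. Probes falling outside the image read as a large
    background depth."""
    h, w = len(depth), len(depth[0])
    dp = depth[py][px]

    def probe(off):
        ox = px + int(off[0] / dp * 1000)
        oy = py + int(off[1] / dp * 1000)
        if 0 <= ox < w and 0 <= oy < h:
            return depth[oy][ox]
        return background

    return probe(u) - probe(v)

# Demo: a flat 4x4 hand region at depth 500 with one deeper pixel.
demo = [[500.0] * 4 for _ in range(4)]
demo[1][3] = 800.0
f_val = depth_feature(demo, 1, 1, u=(1.0, 0.0), v=(0.0, 0.0))
```

A classifier (e.g. a random forest or network) would consume many such feature values per pixel to label it as palm, fingertip, and so on.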
In practice, the depth sensor can be mounted on the front of the head-mounted device so that the user's hand lies within its capture field. Of course, many current VR devices use a mobile device such as a smartphone to provide the scene. Considering manufacturing cost and technological maturity, most current mobile devices use ordinary cameras rather than depth sensors, but as intelligent technology develops, future intelligent mobile devices are very likely to carry depth sensors; in that case, the head-mounted VR device need not be provided with a depth sensor, and the one carried by the intelligent mobile device can be used directly.
That is, mounting the depth sensor on the head-mounted VR device and providing it on the intelligent mobile device are both feasible technical schemes.
The kind of depth sensor is not limited; for example, in the present application the depth sensor may be a structured-light camera or a time-of-flight (TOF) camera.
In the embodiments of the present application, the gesture prediction algorithm is a deep training model obtained by learning from multiple depth training data with a predefined deep learning algorithm. After the first three-dimensional position information is obtained, the attitude information of the hand and the position information of its key points (the second position points, such as the joint points of the fingers), i.e. the second three-dimensional position information, can be obtained according to the gesture prediction algorithm. Both pieces of information are finally input into the pre-built three-dimensional hand model to drive it and output the three-dimensional gesture corresponding to the user's current hand.
In traditional VR/AR equipment, a preset correspondence exists between the three-dimensional gesture of the user's hand and the operation instruction to be performed; for example, a pinch action of the fingers widens the virtual display picture, and a single-finger click opens image content.
Therefore, once the three-dimensional gesture corresponding to the user's current hand is recognized, the operation instruction the user intends to express is also obtained. In practice, any device with processing capability (such as a smartphone) can analyze and execute the operation instruction, realizing interaction between the user and the VR/AR equipment.
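The preset correspondence between recognized gestures and operation instructions can be as simple as a lookup table. The pinch and single-finger-click examples come from the description above; the gesture and command names themselves are illustrative assumptions:

```python
# Hypothetical gesture-to-instruction table. Only the pinch and
# single-finger-click examples are taken from the description; the
# identifiers are made up for illustration.
GESTURE_COMMANDS = {
    "pinch":        "widen_display",   # pinch widens the virtual picture
    "finger_click": "open_content",    # single-finger click opens content
}

def dispatch(gesture, table=GESTURE_COMMANDS):
    """Look up the operation a recognized 3-D gesture should trigger;
    unknown gestures map to a no-op so unrecognized poses are ignored."""
    return table.get(gesture, "noop")
```

The terminal device would then analyze and execute the returned instruction, closing the loop between user and VR/AR equipment.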
Corresponding to the above three-dimensional gesture recognition method, the embodiments of the present application also provide a three-dimensional gesture recognition system, as shown in Fig. 2A (a schematic structural diagram of the three-dimensional gesture recognition system according to the present application). The system includes a wearable device 1 and a terminal device 2, wherein:
The wearable device 1 includes:
a head-mounting structure 11 to be worn on the user's head;
a depth sensor 12 for acquiring depth image information of the user's hand and its surroundings; and
an interface transmission structure 13 arranged on the depth sensor 12, through which the depth sensor 12 can send the depth image information to the terminal device 2, which is detachably mounted on the wearable device 1.
Please also refer to Fig. 2 B, Fig. 2 B is the structural frames of terminal unit in the three-dimension gesture identification system according to the application Figure, as shown in Figure 2 B, described terminal unit 2 may further include:
Extraction module 21, for according to preset hand shape and depth characteristic value, carries from described deep image information Take out described first three dimensional local information;
Computing module 22, is used for using predefined gesture prediction algorithm to carry out pre-to described first three dimensional local information Surveying and calculate, obtain the second three dimensional local information and the attitude information of user's hand, described second three dimensional local information is user's hands The positional information of second position point in portion;
Processing module 23, for inputting, by described second three dimensional local information and described attitude information, the three-dimensional built in advance Hand model, obtains the three-dimension gesture that user's hand is corresponding.
In the embodiments of the present application, the gesture prediction algorithm is a deep training model obtained by learning from multiple depth training data with a predefined deep learning algorithm. The first position points are the contour points of the user's hand, the second position points are the joint points of the hand, and the attitude information is the angles between the bones of the hand.
The depth sensor may be a structured-light camera or a time-of-flight (TOF) camera. Of course, the kind of depth sensor is not limited; in practice, other depth sensors, or other sensors with a similar effect, may also be used.
In the workflow of the three-dimensional gesture recognition system, the depth sensor acquires accurate depth map data. A structured-light camera may use a mature CMOS sensor with dedicated infrared-band illumination and a matching infrared narrow band-pass filter; after binocular calibration, depth values are computed by feature matching. A TOF camera may also be used: it emits laser light and directly computes depth values from the phase difference between the emitted and received optical signals.
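The TOF depth computation mentioned above follows the standard continuous-wave relation depth = c * Δφ / (4π f), where Δφ is the measured phase difference and f the modulation frequency. The formula is standard TOF background rather than something stated in the patent:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Continuous-wave TOF: depth = c * delta_phi / (4 * pi * f).
    The light travels out and back, which is why the round trip
    contributes the factor 2 folded into the 4*pi denominator."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# At a 20 MHz modulation frequency, a phase shift of pi corresponds
# to half the unambiguous range c / (2f), roughly 3.75 m.
d = tof_depth(math.pi, 20e6)
```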
In the embodiments of the present application, a depth camera is fixed to and connected with the intelligent mobile device (the terminal device above) and provides depth point-cloud data; the intelligent mobile device reads the data and estimates the hand's pose and position in real time.
As for the gesture prediction algorithm: first, depth-based training data are built by manually labeling a large number of hand poses at different viewing angles; then a deep learning algorithm is trained on the data to obtain a deep training model (the gesture prediction algorithm), which can output the attitude information of the hand skeleton (the attitude information above) and the three-dimensional position information of the joint points (the second three-dimensional position information above).
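The training procedure, labeled poses in and a model mapping depth features to joint coordinates out, can be illustrated with a toy regressor trained by stochastic gradient descent. This stands in for the deep network; the synthetic data and all names are assumptions:

```python
import random

random.seed(0)

def make_dataset(n=200):
    """Hypothetical training pairs: a 3-value depth-feature vector per
    sample and one hand-labeled joint coordinate generated from a
    hidden linear ground-truth mapping."""
    data = []
    for _ in range(n):
        f = [random.uniform(-1, 1) for _ in range(3)]
        y = 0.5 * f[0] - 0.2 * f[1] + 0.1
        data.append((f, y))
    return data

def train(data, lr=0.1, epochs=200):
    """Plain per-sample SGD on squared error; the real system would
    train a deep model instead of this linear one."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for f, y in data:
            pred = sum(wi * fi for wi, fi in zip(w, f)) + b
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

w, b = train(make_dataset())
```

After training, the learned weights recover the hidden mapping, which is the sense in which the deep model "learns" to predict joint positions from labeled depth data.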
In addition, a virtual three-dimensional hand model is built for the user's hand. In use, the real data (the attitude information and second three-dimensional position information obtained in real time) are input into the three-dimensional hand model, which yields the three-dimensional gesture of the user's hand and reproduces the real-world hand motion, so that the operation instruction corresponding to the user's hand motion can be determined.
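Driving the virtual hand model with the predicted bone angles is essentially forward kinematics along each finger chain. A minimal 2-D sketch follows; the patent does not detail the model's internals, so this is only an illustration of the principle:

```python
import math

def finger_joints(base, bone_lengths, bone_angles):
    """Forward kinematics for one finger chain (2-D sketch): each bone
    rotates by its angle relative to the previous bone, so feeding the
    predicted attitude information into the model reproduces the pose."""
    x, y = base
    theta = 0.0
    joints = [(x, y)]
    for length, ang in zip(bone_lengths, bone_angles):
        theta += ang
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        joints.append((x, y))
    return joints

# Two unit bones; the second bends 90 degrees, like a curling finger.
chain = finger_joints((0.0, 0.0), [1.0, 1.0], [0.0, math.pi / 2])
```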
In the embodiments of the present application, besides providing CPU/GPU computing support for the gesture prediction algorithm to complete the above calculation and processing, the terminal device (e.g. an intelligent mobile device) also produces the VR/AR scene content.
The wearable device, such as a VR/AR head-mounted device, mainly turns the two-dimensional video picture provided by the intelligent mobile device into a panoramic video picture, enhancing the immersive experience of virtual reality and augmented reality applications.
Further, in VR/AR application scenarios, the interaction technique can be combined with panoramic video to further enhance the immersive experience of virtual reality and augmented reality applications.
In use, the intelligent mobile device is connected to the VR/AR head-mounted display to generate real-time panoramic video content, and the above virtual hand model completes the interaction in the VR/AR application scenario.
For a further understanding of the working process of each part of the three-dimensional gesture recognition system, refer to Fig. 3 (a schematic diagram of three-dimensional gesture interaction according to the present application). Since the working principle of each part has been introduced above, it is not described further in connection with Fig. 3.
To ease understanding of the interaction between the wearable device and the terminal device in the three-dimensional gesture recognition system, and of the virtual reality effect presented before the user's eyes, refer to Fig. 4 (a schematic diagram of the effect of three-dimensional gesture interaction between VR/AR equipment and a terminal device according to the present application). The process of using the system is briefly introduced below:
First, the depth camera is fixed on the VR/AR head-mounted display device, and the intelligent mobile device (e.g. a smartphone) is carried by or embedded in the head-mounted display device and fixed; the depth camera is connected to the smartphone with a data cable. A VR/AR application is opened on the smartphone to enter the VR/AR scene. When hands are extended into the field of view of the depth camera, three-dimensional models of the corresponding number of hands appear in the application scenario. The hand pose estimation algorithm simulates the different poses of the hands in the real world, triggering different gesture interaction actions. Bare-handed, mid-air operation is thus realized in the VR/AR application scenario, improving the sense of reality and immersion of the VR/AR application.
In the technical scheme provided by the embodiments of the present application, the hardware comprises a VR/AR head-mounted device, an intelligent mobile device, and one depth sensor (a structured-light camera or TOF camera) fixed and connected to it. The depth sensor obtains the depth point-cloud data of the hand, and the hand pose estimation algorithm then accurately estimates the degree-of-freedom information of the hand skeleton and the three-dimensional position information of the joint points, finally realizing motion interaction in virtual reality and augmented reality applications. Since no extra sensor device is needed on the hand, and the entire algorithm runs only on the hardware of the mobile phone and the depth sensor, the scheme can meet mobile devices' requirements for algorithmic efficiency, precision, and portability.
Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical schemes formed by the particular combination of the above technical features, and, without departing from the inventive concept, also covers other technical schemes formed by any combination of the above technical features or their equivalents, for example schemes formed by replacing the above features with technical features of similar functions disclosed (but not limited to) herein.

Claims (9)

1. A three-dimensional gesture recognition method, characterized in that the method comprises:
acquiring first three-dimensional position information of a user's hand, the first three-dimensional position information being the position information of first position points on the hand;
performing prediction on the first three-dimensional position information with a predefined gesture prediction algorithm to obtain second three-dimensional position information and attitude information of the hand, the second three-dimensional position information being the position information of second position points on the hand; and
inputting the second three-dimensional position information and the attitude information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the hand.
Method the most according to claim 1, it is characterised in that the first three dimensional local information obtaining user's hand includes:
User's hand and the deep image information of surrounding is obtained by depth transducer;And
According to preset hand shape and depth characteristic value, from described deep image information, extract described first three-dimensional position Information.
Method the most according to claim 2, it is characterised in that when described depth transducer is structure light camera or flight Between TOF camera.
Method the most according to claim 1, it is characterised in that described gesture prediction algorithm is according to the predefined degree of depth The degree of depth training pattern that learning algorithm obtains after learning multiple degree of depth training datas.
Method the most according to any one of claim 1 to 4, it is characterised in that described primary importance point is user's hand Profile point, described second position point is the articulare of user's hand, and described attitude information is between the skeleton of user's hand Angle.
6. A three-dimensional gesture recognition system, comprising a wearable device and a terminal device, characterized in that:
the wearable device comprises:
a head fixing structure for being worn on the user's head;
a depth sensor for acquiring depth image information of the user's hand and its surroundings; and
an interface transmission structure arranged on the depth sensor, through which the depth sensor can send the depth image information to the terminal device detachably mounted on the wearable device;
the terminal device comprises:
an extraction module for extracting the first three-dimensional position information from the depth image information according to a preset hand shape and depth feature values;
a computation module for performing predictive computation on the first three-dimensional position information using a predefined gesture prediction algorithm to obtain second three-dimensional position information and posture information of the user's hand, the second three-dimensional position information being position information of second position points on the user's hand; and
a processing module for inputting the second three-dimensional position information and the posture information into a pre-built three-dimensional hand model to obtain the three-dimensional gesture corresponding to the user's hand.
7. The system according to claim 6, characterized in that the depth sensor is a structured-light camera or a time-of-flight (TOF) camera.
8. The system according to claim 6, characterized in that the gesture prediction algorithm is a deep training model obtained by training a predefined deep learning algorithm on a plurality of depth training data.
9. The system according to any one of claims 6 to 8, characterized in that the first position points are contour points of the user's hand, the second position points are joint points of the user's hand, and the posture information is the angles between the bones of the user's hand.
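As an illustration only, the extraction/computation/processing module split recited in claim 6 could be sketched as follows. The class names, the single depth threshold standing in for the "preset hand shape and depth feature values", and the stub predictor are all assumptions for the sketch, not the patented implementation.

```python
import numpy as np

class ExtractionModule:
    """Extracts first (contour) position points from a depth image; a fixed
    depth threshold stands in for the preset hand shape / depth features."""
    def __init__(self, depth_threshold: float = 0.8):
        self.depth_threshold = depth_threshold

    def extract(self, depth_image: np.ndarray) -> np.ndarray:
        ys, xs = np.nonzero(depth_image < self.depth_threshold)
        zs = depth_image[ys, xs]
        return np.stack([xs, ys, zs], axis=1).astype(float)  # (n, 3) points

class ComputationModule:
    """Predicts second (joint) position points and posture information;
    a real system would use a trained gesture prediction model."""
    def predict(self, contour_points: np.ndarray):
        joints = contour_points.mean(axis=0, keepdims=True)  # stub: centroid
        angles = np.zeros(1)                                 # stub: bone angles
        return joints, angles

class ProcessingModule:
    """Feeds joints and posture into a pre-built hand model; here a trivial
    rule stands in for matching against the three-dimensional hand model."""
    def recognize(self, joints: np.ndarray, angles: np.ndarray) -> str:
        return "open_hand" if float(angles.sum()) == 0.0 else "fist"

# Simulated 4x4 depth image (metres): background at 2.0 m, hand patch at 0.5 m.
depth = np.full((4, 4), 2.0)
depth[1:3, 1:3] = 0.5
pts = ExtractionModule().extract(depth)
gesture = ProcessingModule().recognize(*ComputationModule().predict(pts))
print(len(pts), gesture)  # 4 open_hand
```

The three classes mirror the claim's data flow: depth image → first position points → second position points plus posture → recognized gesture.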
CN201610694390.6A 2016-08-19 2016-08-19 Three-dimensional gesture recognition method and system Pending CN106325509A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610694390.6A CN106325509A (en) 2016-08-19 2016-08-19 Three-dimensional gesture recognition method and system


Publications (1)

Publication Number Publication Date
CN106325509A true CN106325509A (en) 2017-01-11

Family

ID=57744382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610694390.6A Pending CN106325509A (en) 2016-08-19 2016-08-19 Three-dimensional gesture recognition method and system

Country Status (1)

Country Link
CN (1) CN106325509A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011106797A1 (en) * 2010-02-28 2011-09-01 Osterhout Group, Inc. Projection triggering through an external marker in an augmented reality eyepiece
CN103839040A (en) * 2012-11-27 2014-06-04 株式会社理光 Gesture identification method and device based on depth images
CN104589356A (en) * 2014-11-27 2015-05-06 北京工业大学 Dexterous hand teleoperation control method based on Kinect human hand motion capturing
CN105138119A (en) * 2015-08-04 2015-12-09 湖南七迪视觉科技有限公司 Stereo vision system with automatic focusing controlled based on human biometrics
CN105302295A (en) * 2015-09-07 2016-02-03 哈尔滨市一舍科技有限公司 Virtual reality interaction device having 3D camera assembly
CN105759967A (en) * 2016-02-19 2016-07-13 电子科技大学 Global hand gesture detecting method based on depth data


Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106843483A (en) * 2017-01-20 2017-06-13 深圳市京华信息技术有限公司 A kind of virtual reality device and its control method
CN110945869A (en) * 2017-04-19 2020-03-31 维多尼股份公司 Augmented reality learning system and method using motion-captured virtual hands
CN107357424A (en) * 2017-06-29 2017-11-17 联想(北京)有限公司 A kind of recognition methods of gesture operation, equipment and computer-readable recording medium
CN107357424B (en) * 2017-06-29 2021-05-18 联想(北京)有限公司 Gesture operation recognition method and device and computer readable storage medium
CN107589834A (en) * 2017-08-09 2018-01-16 广东欧珀移动通信有限公司 Terminal device operation method and device, terminal device
CN107589834B (en) * 2017-08-09 2020-08-07 Oppo广东移动通信有限公司 Terminal equipment operation method and device, terminal equipment
CN109635621A (en) * 2017-10-07 2019-04-16 塔塔顾问服务有限公司 For the system and method based on deep learning identification gesture in first person
CN109635621B (en) * 2017-10-07 2023-04-14 塔塔顾问服务有限公司 System and method for recognizing gestures based on deep learning in first-person perspective
CN109934065A (en) * 2017-12-18 2019-06-25 虹软科技股份有限公司 A kind of method and apparatus for gesture identification
CN108303708A (en) * 2018-01-03 2018-07-20 京东方科技集团股份有限公司 Three-dimensional reconstruction system and method, mobile device, eye care method, AR equipment
CN108303708B (en) * 2018-01-03 2022-07-29 京东方科技集团股份有限公司 Three-dimensional reconstruction system and method, mobile device, eye protection method and AR device
US11346946B2 (en) 2018-01-03 2022-05-31 Beijing Boe Optoelectronics Technology Co., Ltd. Three-dimensional reconstruction system and method, mobile device, eye protection method, AR device
CN108681402A (en) * 2018-05-16 2018-10-19 Oppo广东移动通信有限公司 Identification interaction method and device, storage medium and terminal equipment
CN108646920A (en) * 2018-05-16 2018-10-12 Oppo广东移动通信有限公司 Identification interaction method and device, storage medium and terminal equipment
WO2019218880A1 (en) * 2018-05-16 2019-11-21 Oppo广东移动通信有限公司 Interaction recognition method and apparatus, storage medium, and terminal device
CN109002163A (en) * 2018-07-10 2018-12-14 深圳大学 Three-dimension interaction gesture sample method, apparatus, computer equipment and storage medium
CN109656355A (en) * 2018-10-23 2019-04-19 西安交通大学 A kind of exchange method and device of mobile phone and other display equipment
CN110597112A (en) * 2019-09-03 2019-12-20 珠海格力电器股份有限公司 Three-dimensional gesture control method of cooking appliance and cooking appliance
CN111047827A (en) * 2019-12-03 2020-04-21 北京深测科技有限公司 Intelligent monitoring method and system for environment-assisted life
CN113496168A (en) * 2020-04-02 2021-10-12 百度在线网络技术(北京)有限公司 Sign language data acquisition method, sign language data acquisition equipment and storage medium
WO2021258862A1 (en) * 2020-06-24 2021-12-30 Oppo广东移动通信有限公司 Typing method and apparatus, and device and storage medium
CN112198962A (en) * 2020-09-30 2021-01-08 聚好看科技股份有限公司 Method for interacting with virtual reality equipment and virtual reality equipment
CN112198962B (en) * 2020-09-30 2023-04-28 聚好看科技股份有限公司 Method for interacting with virtual reality equipment and virtual reality equipment
CN112487389A (en) * 2020-12-16 2021-03-12 熵基科技股份有限公司 Identity authentication method, device and equipment
CN112858855A (en) * 2021-02-23 2021-05-28 海南电网有限责任公司定安供电局 Multispectral abnormal temperature partial discharge fault comprehensive testing device
CN114387836A (en) * 2021-12-15 2022-04-22 上海交通大学医学院附属第九人民医院 Virtual surgery simulation method and device, electronic equipment and storage medium
CN114387836B (en) * 2021-12-15 2024-03-22 上海交通大学医学院附属第九人民医院 Virtual operation simulation method and device, electronic equipment and storage medium
CN114445676A (en) * 2022-01-12 2022-05-06 广州虎牙科技有限公司 Gesture image processing method, storage medium and equipment
CN118827847A (en) * 2023-04-20 2024-10-22 荣耀终端有限公司 Display method and related device

Similar Documents

Publication Publication Date Title
CN106325509A (en) Three-dimensional gesture recognition method and system
US11762475B2 (en) AR scenario-based gesture interaction method, storage medium, and communication terminal
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
JP7130057B2 (en) Hand Keypoint Recognition Model Training Method and Device, Hand Keypoint Recognition Method and Device, and Computer Program
JP2022515620A (en) Image area recognition method by artificial intelligence, model training method, image processing equipment, terminal equipment, server, computer equipment and computer program
CN115956259A (en) Generating an underlying real dataset for a virtual reality experience
CN112037314A (en) Image display method, image display device, display equipment and computer readable storage medium
CN107450714A (en) Man-machine interaction support test system based on augmented reality and image recognition
CN109215416A (en) A kind of Chinese character assistant learning system and method based on augmented reality
CN106293099A (en) Gesture identification method and system
CN109074497A (en) Use the activity in depth information identification sequence of video images
CN107944376A (en) The recognition methods of video data real-time attitude and device, computing device
CN109949900B (en) Display method, device, computer equipment and storage medium of three-dimensional pulse wave
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
CN113506377A (en) Teaching training method based on virtual roaming technology
CN115994944A (en) Three-dimensional key point prediction method, training method and related equipment
CN114298268A (en) Image acquisition model training method, image detection method, device and equipment
CN109907741B (en) Three-dimensional pulse wave display method and device, computer equipment and storage medium
CN108682282A (en) A kind of exchange method of the augmented reality version periodic table of chemical element based on ARKit frames
Mahayuddin et al. Vision based 3D gesture tracking using augmented reality and virtual reality for improved learning applications
CN116934959A (en) Particle image generation method and device based on gesture recognition, electronic equipment and medium
CN111258413A (en) Control method and device of virtual object
CN109840948A (en) The put-on method and device of target object based on augmented reality
Shchur et al. Smartphone app with usage of AR technologies-SolAR System
CN101499176B (en) Video game interface method

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170111