
CN101609618A - Real-time Sign Language Communication System Based on Spatial Coding - Google Patents

Real-time Sign Language Communication System Based on Spatial Coding

Info

Publication number
CN101609618A
Authority
CN
China
Prior art keywords
sign language
space
information
palm
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101635548A
Other languages
Chinese (zh)
Other versions
CN101609618B (en)
Inventor
张宁宁
顾容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN2008101635548A priority Critical patent/CN101609618B/en
Publication of CN101609618A publication Critical patent/CN101609618A/en
Application granted granted Critical
Publication of CN101609618B publication Critical patent/CN101609618B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A real-time sign language communication system based on spatial coding comprises a data glove that detects changes in the wearer's hand shape, a position tracker that detects the spatial region in which the wearer's hand is located, and an intelligent recognition device that performs sign language recognition from the data glove and position tracker. The intelligent recognition device includes a module that converts sign language motion information into text information, a module that converts text information into sign language information, and a sign language dictionary database that links text information, sign language codes, and motion animations into corresponding sequences. The invention provides a fast, low-cost real-time sign language communication system with strong real-time performance.

Description

Real-time sign language communication system based on spatial coding
Technical field
The present invention relates to a system for real-time communication between deaf-mute and hearing people, and in particular to a sign language communication system.
Background art
Sign language is the language used by deaf-mute people. It is a relatively stable expression system built from hand-shape movements, supplemented by facial expression and posture, and is a special language communicated through action and vision. China has more than 20 million people with hearing impairments, and they communicate mainly in sign language. Because sign language is not a language commonly used by most of society, their exchange with society is restricted to a great extent. The development of sign language communication systems can alleviate this problem to some degree, plays a significant role in creating a barrier-free environment for deaf people, and helps promote standard Chinese Sign Language.
As society pays more attention to deaf-mute people, more and more scholars and experts have begun to study sign language recognition in order to better realize communication between hearing and deaf-mute people. Current sign language recognition is mainly divided into data-glove-based recognition and vision (image)-based recognition. For recognizing sign language, which has spatio-temporal concurrency, a data glove should be used as the hand-shape input device and a position tracker should be used to capture the motion of the palm. Compared with a camera, the data collected by a data glove and a position tracker are compact and accurate, and these two devices readily capture features that reflect the spatio-temporal characteristics of sign language, such as finger-joint motion information and palm motion information; moreover, data-glove measurements are not affected by environmental changes such as illumination. Many experts worldwide have devoted themselves to sign language recognition methods and have realized the conversion of sign language signals into text and audio information, and some have used electronic equipment to convert text information into sign language animation, thereby realizing one-way sign language exchange between a person and a machine terminal.
However, the vast majority of a deaf-mute person's communication barriers arise in exchanges with hearing people. To achieve real-time communication, the input/output devices must respond rapidly, and efficient recognition and conversion methods are also required, so that sign language and text information can be converted into each other almost instantly. For further popularization, the equipment used must not be too expensive.
Summary of the invention
To overcome the poor speed, poor real-time performance, and high cost of existing sign language recognition systems, the invention provides a real-time sign language communication system based on spatial coding that is fast, responsive in real time, and low-cost.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A real-time sign language communication system based on spatial coding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device that performs sign language recognition from the data glove and the position tracker. The intelligent recognition device includes a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that links text information, sign language codes, and motion animations into corresponding sequences. The module for converting sign language motion information into text information comprises:
a signal data acquisition unit, which obtains data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vectors of input data;
a data preprocessing unit, which defines the range of bend values according to the glove wearer's signing habits and, using the position tracker, divides the spatial region according to the wearer's individual build by locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder;
a sign language feature extraction unit, which extracts gesture information from the data-glove input and extracts the direction and position of the hand from the position-tracker input, forming the feature vector of an input sample;
a hand-shape information coding unit, which encodes the sign language signal from the resulting feature vector into a character string according to the following rules:
(4.1) each finger joint is classified into three states (straight, half bent, and fully bent), and the knuckle bending state is binary-coded;
(4.2) the spatial region in which the palm is located is binary-coded;
an output information matching unit, which queries the sign language dictionary database with the value of the character string; the query result yields the text information.
The module for converting text information into sign language information comprises:
a virtual-human presentation-space definition unit, which sets the presentation space of the virtual human according to its skeleton parameters and partitions that space using the spatial division rules;
a positioning determination unit, which records the matrices of the moving palm at the centre of each region of the presentation space and of the related bone positions;
an input information matching unit, which searches the sign language dictionary database using the input information as a keyword and, when relevant information is found, extracts the code of the sign language action;
a key-frame positioning unit, which, according to the code of the sign language action, sets the positions of the virtual human's bones in each region involved in the meaning of the sign, yielding the virtual human's motion key frames;
a virtual demonstration unit, which automatically generates interpolated frames from the key frames to obtain the sign language animation demonstrated by the virtual human and displays it on the screen terminal.
In a preferred scheme, in the hand-shape information coding unit the three finger states are coded as straight 00, half bent 01, and fully bent 10; coding each of the five fingers in this way, the state of the whole hand is represented by a ten-character string.
In another preferred scheme, in the hand-shape information coding unit the spatial region of the palm is binary-coded as follows: the space above the mouth is divided into three regions, where the space to the left of the left ear is "1001", the space between the left ear and the right ear is "1000", and the space to the right of the right ear is "1010"; the space below the mouth and above the shoulders is divided into three regions, where the space to the left of the left ear is "0001", the space between the left ear and the right ear is "0000", and the space to the right of the right ear is "0010"; the space below the shoulders is likewise divided into three regions, where the space to the left of the left shoulder is "0101", the space between the left shoulder and the right shoulder is "0100", and the space to the right of the right shoulder is "0110". The palm direction is coded from the XYZ coordinates reported by the position tracker for the palm vector: palm up 0010, palm down 0011, palm to the left 0101, palm to the right 0100, palm forward 1001, and palm toward the body 1000.
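The two preferred schemes above fully specify the code books. Purely as an illustration, and not as part of the patent disclosure, the following Python sketch assembles the 10-bit hand-shape code, the 4-bit palm-region code, and the 4-bit palm-direction code for a single time step; all function and variable names, and the thumb-to-little-finger ordering, are assumptions introduced here.

```python
# Hypothetical illustration of the coding rules described above.

FINGER_STATE_CODE = {"straight": "00", "half_bent": "01", "fully_bent": "10"}

# 4-bit palm-region codes: three vertical bands (above mouth, mouth-to-shoulder,
# below shoulder) x left / centre / right, as listed in the preferred scheme.
PALM_REGION_CODE = {
    ("above_mouth", "left"):   "1001",
    ("above_mouth", "centre"): "1000",
    ("above_mouth", "right"):  "1010",
    ("mouth_to_shoulder", "left"):   "0001",
    ("mouth_to_shoulder", "centre"): "0000",
    ("mouth_to_shoulder", "right"):  "0010",
    ("below_shoulder", "left"):   "0101",
    ("below_shoulder", "centre"): "0100",
    ("below_shoulder", "right"):  "0110",
}

# 4-bit palm-direction codes from the preferred scheme.
PALM_DIRECTION_CODE = {
    "up": "0010", "down": "0011", "left": "0101",
    "right": "0100", "forward": "1001", "toward_body": "1000",
}

def encode_frame(finger_states, region, direction):
    """Return (direction_code, region_code, hand_shape_code) for one time step.

    finger_states: five state labels, assumed thumb to little finger.
    """
    hand_shape = "".join(FINGER_STATE_CODE[s] for s in finger_states)  # 10 bits
    return PALM_DIRECTION_CODE[direction], PALM_REGION_CODE[region], hand_shape

# Example: manual-alphabet letter "B" (thumb bent, other fingers straight),
# palm forward -> direction "1001", hand shape "1000000000", matching the "B"
# row of the alphabet table in the embodiment below.  The region argument is
# only shown for completeness; the static alphabet table codes direction and
# hand shape only.
print(encode_frame(
    ["fully_bent", "straight", "straight", "straight", "straight"],
    ("mouth_to_shoulder", "centre"), "forward"))
```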
The technical idea of the invention is as follows: based on statistical regularities of common sign language actions, a coded partition of hand shapes and gesture space is constructed, and a method dedicated to real-time exchange between sign language and text is proposed. By combining the hand-shape and gesture codes with a sign language dictionary database, a fast encoding method is given for sign language recognition and a fast decoding method is given for sign language synthesis. The method has high coding efficiency and fast decoding speed.
The invention also proposes an effective way of using the sign language dictionary database to store sign language codes and to support sign language synthesis. Importing raw animation data action by action into a sign language motion database requires large storage, yields low system efficiency, and produces oversized synthesis files, which is unsuitable for real-time sign language translation. The proposed database usage is a more efficient method.
The characteristics of the spatial coding of the invention are: (1) In common sign language, the spatial positions of the hands are concentrated mainly around the head, with a small share distributed around the upper body, so the head region is the space with the highest frequency of sign language motion. Vertically, the space is divided into 3 horizontal bands using the mouth and the shoulders as boundaries; horizontally, it is divided into 6 longitudinal bands using the left ear, right ear, left shoulder, and right shoulder as boundaries. This division partitions the space around the head finely and the space around the body coarsely, which improves spatial recognition efficiency. (2) The sign language space is predefined according to the individual size of the equipment wearer, so the partition is adaptive. (3) The coding efficiency is high. Thanks to the spatial partition, both hand-shape coding and gesture coding use pure binary codes, without long or special code words, so the sign language dictionary database needs less storage and is faster to read and write. With this coding of sign language motion information, the amount of data per sentence can be kept within 0.75 K. (4) Decoding is fast. Because the code sequence maps directly onto the virtual human's bone matrices, editing of pictures or animation can be avoided during decoding, so sign language synthesis is easy to realize. (5) The method is simple and easy to implement. The whole algorithm uses only binary matching operations and avoids complex computation; it is a simple and efficient coding method that can be ported easily to development platforms of many different versions.
The beneficial effects of the invention are mainly: (1) through the spatial partition and coding of sign language motion, real-time sign language exchange is realized and meaningless waiting and response time are reduced; (2) a data glove with only finger-bend sensors suffices to extract hand-shape features, lowering the cost of the supporting equipment; (3) the coding is efficient, and the wearer can extend the sign language dictionary database through sign language input, guaranteeing the completeness of the sign vocabulary; (4) the fast sign language synthesis method guarantees a high synthesis speed; (5) the real-time sign language communication system also has a training character and can provide sign language teaching and training for users who do not know sign language.
Description of drawings
Fig. 1 is a schematic diagram of the framework of the real-time sign language communication system.
Fig. 2 is a schematic diagram of the hand-shape coding key.
Fig. 3 is a schematic diagram of the sign language space division.
Fig. 4 is a schematic diagram of the reference coordinates for measuring the palm direction.
Fig. 5 shows the Chinese manual alphabet.
Fig. 6 is a schematic diagram of the sign for "hello".
Fig. 7 is a schematic diagram of the sign for "Nice to see you".
Embodiment
The present invention is further described below with reference to the accompanying drawings.
Referring to Figs. 1 to 7, a real-time sign language communication system based on spatial coding comprises a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device that performs sign language recognition from the data glove and the position tracker. The intelligent recognition device includes a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database that links text information, sign language codes, and motion animations into corresponding sequences. The module for converting sign language motion information into text information comprises: a signal data acquisition unit, which obtains data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vectors of input data; a data preprocessing unit, which defines the range of bend values according to the glove wearer's signing habits and, using the position tracker, divides the spatial region according to the wearer's individual build by locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder; a sign language feature extraction unit, which extracts gesture information from the data-glove input and extracts the direction and position of the hand from the position-tracker input, forming the feature vector of an input sample; and a hand-shape information coding unit, which encodes the sign language signal from the resulting feature vector into a character string according to the following rules:
(4.1) each finger joint is classified into three states (straight, half bent, and fully bent), and the knuckle bending state is binary-coded;
(4.2) the spatial region in which the palm is located is binary-coded.
The output information matching unit queries the sign language dictionary database with the value of the character string; the query result yields the text information.
The module for converting text information into sign language information comprises: a virtual-human presentation-space definition unit, which sets the presentation space of the virtual human according to its skeleton parameters and partitions that space using the spatial division rules; a positioning determination unit, which records the matrices of the moving palm at the centre of each region of the presentation space and of the related bone positions; an input information matching unit, which searches the sign language dictionary database using the input information as a keyword and, when relevant information is found, extracts the code of the sign language action; a key-frame positioning unit, which, according to the code of the sign language action, sets the positions of the virtual human's bones in each region involved in the meaning of the sign, yielding the virtual human's motion key frames; and a virtual demonstration unit, which automatically generates interpolated frames from the key frames to obtain the sign language animation demonstrated by the virtual human and displays it on the screen terminal.
Referring to Figs. 2 and 3, coding is performed according to the motion information of the gesture. Experiments on sign language vocabulary, carried out with full consideration of the real-time requirement of sign language exchange, show that the spatial-coding-based sign language recognition method given here is stable and fast. The method is illustrated below on Chinese sign language vocabulary.
When the user puts on the data glove, bend thresholds are defined according to the size of the user's hand and the individual user. For example, with raw data-glove values in the range 0 to 4095, for the index finger a reading below 1862 indicates that the finger is straight, a reading between 1862 and 2268 indicates that it is half bent, and a reading between 2268 and 4095 indicates that it is fully bent. Because each finger has a different degree of flexibility, the thresholds differ slightly from finger to finger.
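Purely as an illustration of this quantisation step (the function and variable names are not from the patent, and only the index-finger thresholds above are given in the text; per-finger thresholds would be calibrated for each wearer), a sketch in Python could look like this:

```python
# Hypothetical quantizer for raw data-glove readings (range 0-4095).
# Only the index-finger thresholds (1862, 2268) are stated in the patent text.

INDEX_THRESHOLDS = (1862, 2268)  # straight < 1862 <= half bent < 2268 <= fully bent

def finger_state(raw_value, thresholds=INDEX_THRESHOLDS):
    """Map a raw bend-sensor value onto the three coded finger states."""
    low, high = thresholds
    if raw_value < low:
        return "straight"      # coded as "00"
    if raw_value < high:
        return "half_bent"     # coded as "01"
    return "fully_bent"        # coded as "10"

print(finger_state(1500), finger_state(2000), finger_state(3000))
# -> straight half_bent fully_bent
```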
The position tracker is fixed on the user's wrist, with the origin of the absolute coordinate system at the receiver. The angle information from the position tracker determines the palm direction. For each user, the space is divided vertically into 3 horizontal bands using the mouth and the shoulders as boundaries, and horizontally into 6 longitudinal bands using the left ear, right ear, left shoulder, and right shoulder as boundaries.
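As a non-authoritative sketch of this spatial partition (names and the coordinate convention are assumptions, and a single pair of lateral boundaries is used for brevity, whereas the patent uses the ears above the shoulders and the shoulders below), the tracker-reported palm position can be classified like this:

```python
# Hypothetical classifier for the 3x3 spatial partition around the signer.
# Vertical boundaries: mouth height and shoulder height; lateral boundaries:
# calibrated per wearer (ears for the upper bands, shoulders for the lower band
# in the patent; one pair is used here to keep the sketch short).

def classify_region(x, y, mouth_y, shoulder_y, left_x, right_x):
    """Return (vertical_band, lateral_band) labels for a palm position.

    Assumes y grows upwards and x grows to the signer's right.
    """
    if y > mouth_y:
        band = "above_mouth"
    elif y > shoulder_y:
        band = "mouth_to_shoulder"
    else:
        band = "below_shoulder"

    if x < left_x:
        side = "left"
    elif x > right_x:
        side = "right"
    else:
        side = "centre"
    return band, side

print(classify_region(x=0.05, y=1.55, mouth_y=1.50, shoulder_y=1.35,
                      left_x=-0.10, right_x=0.10))
# -> ('above_mouth', 'centre')
```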
The coding method of the invention is: for the hand-shape and gesture information obtained at the terminal, the code value of each hand shape and each piece of gesture information is set in turn, producing bit-stream data; this code table is stored in a dynamic linked list ordered by time. According to how the hand shape changes over time, gestures are classified as static gestures, static compound gestures, and dynamic gestures.
(1) Static gesture: the corresponding hand shapes and coding rules are given in Table 1, which lists the codes for the Chinese manual alphabet:
Letter   Palm direction   Hand shape code
A        0100             0010101010
B        1001             1000000000
C        0101             0101010101
D        1001             1010101010
E        1000             1010000000
F        1000             1000000101
G        1000             1000101010
H        1001             1000001010
I        1001             1000101010
J        0101             1001101010
K        0101             0000001010
L        0101             0000101010
M        1001             1001010110
N        1001             1001011010
O        0101             1001010101
P        0101             1010000000
Q        1001             0001011010
R        1000             0000101010
S        1001             0010101010
T        1001             1000101000
U        1001             0000000000
V        1001             0100001010
W        1001             0100000010
X        1001             0101001010
Y        1001             0010101000
Z        1000             0100101000
ZH       1000             0100001000
CH       0011             0001010101
SH       1001             0001011010
NG       1000             1010101000
Table 1
(2) Static compound gesture: the sign for "hello" is shown in Fig. 6; the hand shape at each step of the time sequence is coded as shown in Table 2.
Time step   Palm direction   Palm position   Hand shape code
1           0011             0100            1000101010
2           0011             0100            1001101010
3           0011             0100            0010101010
Table 2
(3) Dynamic gesture: the sign for "Nice to see you" is shown in Fig. 7; the hand shape at each step of the time sequence is coded as shown in Table 3.
Time step   Palm direction   Palm position   Hand shape code
1           1000             0100            0000000000
2           1000             0100            1000101010
3           0010             0100            1000101010
4           1000             0100            0000000000
5           0011             1000            1000001010
6           0101             0001            1000101010
7           0101             0100            1000101010
Table 3
The above code information is matched against the sign language dictionary database. If the codes at every step of the time sequence match, the corresponding text is output; if any code differs, "unknown sign language information" is output, and the user can sign again to confirm the sign or enter it into the sign language dictionary database as a new sign.
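A minimal sketch of this matching step, assuming a dictionary that maps each word to its time-ordered code sequence (the entry shown reuses the "hello" codes of Table 2; the dictionary layout and all names are hypothetical, not the patent's data format):

```python
# Hypothetical dictionary lookup: every entry maps a word to its time-ordered
# sequence of (palm_direction, palm_position, hand_shape) codes.

SIGN_DICTIONARY = {
    "hello": [
        ("0011", "0100", "1000101010"),
        ("0011", "0100", "1001101010"),
        ("0011", "0100", "0010101010"),
    ],
}

def match_sign(observed_sequence):
    """Return the text for a code sequence that matches step for step."""
    for text, reference in SIGN_DICTIONARY.items():
        if len(reference) == len(observed_sequence) and all(
            obs == ref for obs, ref in zip(observed_sequence, reference)
        ):
            return text
    return "unknown sign language information"  # user may re-sign or add the word

print(match_sign(SIGN_DICTIONARY["hello"]))  # -> hello
```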
When a user who does not know sign language sees the text message and responds with text, the virtual human's presentation space is set with reference to Fig. 3. The virtual human's bones are driven so that the palm passes through the centre of each region of the presentation space, and the bone-joint matrices and the related transformation information are recorded. The input text is used as a keyword to search the sign language dictionary database; if relevant information is found, the code of the sign language action is extracted, and if nothing is found the user is reminded to re-enter the text message. The animation time and each key moment are set according to the time sequence of the sign language action. The palm-position code is extracted to set the position of the virtual human's bones in each region of Fig. 3 at each key moment, the palm-direction code is extracted to set the direction of the virtual hand's bones, and the hand-shape code is extracted to set the motion of the virtual human's finger bones, yielding the key frame for each time step; interpolated frames are then generated from the key frames to obtain the sign language animation demonstrated by the virtual human, which is displayed on the screen terminal.
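For the synthesis direction, plain linear interpolation between successive key frames is one way to generate the in-between frames; the sketch below is illustrative only, and all names and the interpolation scheme are assumptions rather than the patent's implementation.

```python
# Hypothetical key-frame interpolation for the avatar demonstration.
# Each key frame is a joint-parameter vector derived from the stored bone
# matrices of the region centres; intermediate frames are interpolated linearly.

def interpolate_frames(keyframes, steps_between=10):
    """Expand a list of key-frame vectors into a smooth frame sequence."""
    frames = []
    for start, end in zip(keyframes, keyframes[1:]):
        for i in range(steps_between):
            t = i / steps_between
            frames.append([a + t * (b - a) for a, b in zip(start, end)])
    frames.append(keyframes[-1])
    return frames

# Two toy key frames (e.g. wrist x, y, z) expanded to 21 frames in total.
print(len(interpolate_frames([[0.0, 0.0, 0.0], [1.0, 0.5, 0.2]], steps_between=20)))
```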

Claims (3)

1. A real-time sign language communication system based on spatial coding, comprising a data glove for detecting changes in the wearer's hand shape, a position tracker for detecting the spatial region in which the wearer's hand is located, and an intelligent recognition device that performs sign language recognition from the data glove and the position tracker, characterized in that the intelligent recognition device comprises a module for converting sign language motion information into text information, a module for converting text information into sign language information, and a sign language dictionary database for establishing corresponding sequences of text information, sign language codes, and motion animations; the module for converting sign language motion information into text information comprises:
a signal data acquisition unit for obtaining data frames for each time period according to the baud rate of the data glove and the position tracker, yielding a series of vectors of input data;
a data preprocessing unit for defining the range of bend values according to the glove wearer's signing habits and for dividing the spatial region according to the wearer's individual features with the position tracker, locating the positions of the mouth, the left earlobe, the right earlobe, the left shoulder, and the right shoulder;
a sign language feature extraction unit for extracting gesture information from the data input by the data glove and extracting the direction and position of the hand from the data input by the position tracker, forming the feature vector of an input sample;
a hand-shape information coding unit for encoding the sign language signal from the obtained feature vector into a character string according to the following rules:
(4.1) each finger joint is classified into three states (straight, half bent, and fully bent), and the knuckle bending state is binary-coded;
(4.2) the spatial region in which the palm is located is binary-coded;
an output information matching unit for querying the sign language dictionary database with the value of the character string, the query result yielding the text information;
and the module for converting text information into sign language information comprises:
a virtual-human presentation-space definition unit for setting the presentation space of the virtual human according to its skeleton parameters and defining the presentation space with the spatial division rules;
a positioning determination unit for recording the matrices of the moving palm at the centre of each region of the presentation space and of the related bone positions;
an input information matching unit for searching the sign language dictionary database with the input information as a keyword and, when relevant information is found, extracting the code of the sign language action;
a key-frame positioning unit for setting, according to the code of the sign language action, the positions of the virtual human's bones in each region involved in the meaning of the sign, yielding the virtual human's motion key frames;
a virtual demonstration unit for automatically generating interpolated frames from the key frames, obtaining the sign language animation demonstrated by the virtual human and displaying it on the screen terminal.
2. The real-time sign language communication system based on spatial coding according to claim 1, characterized in that, in the hand-shape information coding unit, the three finger states are coded as straight 00, half bent 01, and fully bent 10; with every finger coded in this way, a ten-character string represents the state of the five fingers.
3. The real-time sign language communication system based on spatial coding according to claim 1 or 2, characterized in that, in the hand-shape information coding unit, the spatial region of the palm is binary-coded as follows: the space above the mouth is divided into three regions, where the space to the left of the left ear is "1001", the space between the left ear and the right ear is "1000", and the space to the right of the right ear is "1010"; the space below the mouth and above the shoulders is divided into three regions, where the space to the left of the left ear is "0001", the space between the left ear and the right ear is "0000", and the space to the right of the right ear is "0010"; the space below the shoulders is likewise divided into three regions, where the space to the left of the left shoulder is "0101", the space between the left shoulder and the right shoulder is "0100", and the space to the right of the right shoulder is "0110"; the palm direction is coded from the XYZ coordinates of the position tracker corresponding to the palm vector: palm up 0010, palm down 0011, palm to the left 0101, palm to the right 0100, palm forward 1001, and palm toward the body 1000.
CN2008101635548A 2008-12-23 2008-12-23 Real-time sign language communication system based on spatial coding Expired - Fee Related CN101609618B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101635548A CN101609618B (en) 2008-12-23 2008-12-23 Real-time sign language communication system based on spatial coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101635548A CN101609618B (en) 2008-12-23 2008-12-23 Real-time sign language communication system based on spatial coding

Publications (2)

Publication Number Publication Date
CN101609618A true CN101609618A (en) 2009-12-23
CN101609618B CN101609618B (en) 2012-05-30

Family

ID=41483357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101635548A Expired - Fee Related CN101609618B (en) 2008-12-23 2008-12-23 Real-time sign language communication system based on spatial coding

Country Status (1)

Country Link
CN (1) CN101609618B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193633A (en) * 2011-05-25 2011-09-21 广州畅途软件有限公司 dynamic sign language recognition method for data glove
CN102723019A (en) * 2012-05-23 2012-10-10 苏州奇可思信息科技有限公司 Sign language teaching system
CN103309434A (en) * 2012-03-12 2013-09-18 联想(北京)有限公司 Instruction identification method and electronic equipment
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN104134060A (en) * 2014-08-03 2014-11-05 上海威璞电子科技有限公司 Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
CN104462162A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Novel sign language recognition and collection method and device
CN104599553A (en) * 2014-12-29 2015-05-06 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104765455A (en) * 2015-04-07 2015-07-08 中国海洋大学 Man-machine interactive system based on striking vibration
CN106056994A (en) * 2016-08-16 2016-10-26 安徽渔之蓝教育软件技术有限公司 Assisted learning system for gesture language vocational education
CN104599554B (en) * 2014-12-29 2017-01-25 闽南师范大学 A sign language teaching system and method based on two-dimensional code recognition
CN108427910A (en) * 2018-01-30 2018-08-21 浙江凡聚科技有限公司 Deep-neural-network AR sign language interpreters learning method, client and server
CN110491250A (en) * 2019-08-02 2019-11-22 安徽易百互联科技有限公司 A kind of deaf-mute's tutoring system
CN113657101A (en) * 2021-07-20 2021-11-16 北京搜狗科技发展有限公司 A data processing method, device and device for data processing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05241496A (en) 1992-02-27 1993-09-21 Toshiba Corp Finger language interpretation device
CN1506871A (en) * 2002-12-06 2004-06-23 徐晓毅 Sign language translating system
CN1664807A (en) * 2005-03-21 2005-09-07 山东省气象局 Adaptation of dactylology weather forecast in network
CN101005574A (en) * 2006-01-17 2007-07-25 上海中科计算技术研究所 Video frequency virtual humance sign language compiling system

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193633A (en) * 2011-05-25 2011-09-21 广州畅途软件有限公司 dynamic sign language recognition method for data glove
CN103309434B (en) * 2012-03-12 2016-03-30 联想(北京)有限公司 A kind of instruction identification method and electronic equipment
CN103309434A (en) * 2012-03-12 2013-09-18 联想(北京)有限公司 Instruction identification method and electronic equipment
CN102723019A (en) * 2012-05-23 2012-10-10 苏州奇可思信息科技有限公司 Sign language teaching system
CN103337079A (en) * 2013-07-09 2013-10-02 广州新节奏智能科技有限公司 Virtual augmented reality teaching method and device
CN104462162A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Novel sign language recognition and collection method and device
CN104134060A (en) * 2014-08-03 2014-11-05 上海威璞电子科技有限公司 Sign language interpreting, displaying and sound producing system based on electromyographic signals and motion sensors
CN104134060B (en) * 2014-08-03 2018-01-05 上海威璞电子科技有限公司 Sign language interpreter and display sonification system based on electromyographic signal and motion sensor
CN104599553A (en) * 2014-12-29 2015-05-06 闽南师范大学 Barcode recognition-based sign language teaching system and method
CN104599554B (en) * 2014-12-29 2017-01-25 闽南师范大学 A sign language teaching system and method based on two-dimensional code recognition
CN104599553B (en) * 2014-12-29 2017-01-25 闽南师范大学 A sign language teaching system and method based on barcode recognition
CN104765455A (en) * 2015-04-07 2015-07-08 中国海洋大学 Man-machine interactive system based on striking vibration
CN106056994A (en) * 2016-08-16 2016-10-26 安徽渔之蓝教育软件技术有限公司 Assisted learning system for gesture language vocational education
CN108427910A (en) * 2018-01-30 2018-08-21 浙江凡聚科技有限公司 Deep-neural-network AR sign language interpreters learning method, client and server
CN110491250A (en) * 2019-08-02 2019-11-22 安徽易百互联科技有限公司 A kind of deaf-mute's tutoring system
CN113657101A (en) * 2021-07-20 2021-11-16 北京搜狗科技发展有限公司 A data processing method, device and device for data processing

Also Published As

Publication number Publication date
CN101609618B (en) 2012-05-30

Similar Documents

Publication Publication Date Title
CN101609618A (en) Real-time Sign Language Communication System Based on Spatial Coding
CN101577062B (en) Space encoding-based method for realizing interconversion between sign language motion information and text message
Sincan et al. Autsl: A large scale multi-modal turkish sign language dataset and baseline methods
Yang et al. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming
CN103246891B (en) A kind of Chinese Sign Language recognition methods based on Kinect
CN110705390A (en) Body posture recognition method and device based on LSTM and storage medium
CN108776773A (en) A kind of three-dimensional gesture recognition method and interactive system based on depth image
CN101527092A (en) Computer assisted hand language communication method under special session context
CN105868715A (en) Hand gesture identifying method, apparatus and hand gesture learning system
CN105426850A (en) Human face identification based related information pushing device and method
CN109190578A (en) The sign language video interpretation method merged based on convolution network with Recognition with Recurrent Neural Network
CN102831380A (en) Body action identification method and system based on depth image induction
CN106097835B (en) Deaf-mute communication intelligent auxiliary system and communication method
CN105536205A (en) Upper limb training system based on monocular video human body action sensing
CN111723779B (en) Chinese sign language recognition system based on deep learning
CN111126280B (en) Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method
CN115188074A (en) An interactive sports training evaluation method, device, system and computer equipment
KR20210018028A (en) Handwriting and arm movement learning-based sign language translation system and method
CN118969009A (en) A method for synthesizing interactive digital humans based on speech-driven artificial intelligence
Zhao et al. Hand gesture recognition based on deep learning
CN110516114A (en) Automatic labeling method and terminal of motion database based on attitude
CN112487951B (en) Sign language recognition and translation method
Ji et al. 3D hand gesture coding for sign language learning
CN111079661B (en) sign language recognition system
Tian et al. Survey of deep face manipulation and fake detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120530

CF01 Termination of patent right due to non-payment of annual fee