
CN113706977A - Playing method and system based on intelligent sign language translation software - Google Patents


Info

Publication number
CN113706977A
CN113706977A
Authority
CN
China
Prior art keywords
voice data
text
sign language
voice
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010814396.9A
Other languages
Chinese (zh)
Inventor
杨阳
张小兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sure enough, barrier free technology (Suzhou) Co.,Ltd.
Original Assignee
Suzhou Yunguo Xinxin Film And Television Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Yunguo Xinxin Film And Television Technology Co., Ltd.
Priority to CN202010814396.9A
Publication of CN113706977A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 21/00: Teaching, or communicating with, the blind, deaf or mute
    • G09B 21/009: Teaching or communicating with deaf persons
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/04: Segmentation; Word boundary detection
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/06: Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L 21/10: Transforming into visible information
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/06: Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L 21/18: Details of the transformation process

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Business, Economics & Management (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a playing method and system based on intelligent sign language translation software, wherein the method comprises the following steps: acquiring input voice data; converting the voice data into text data; parsing the text data to obtain an action sequence; and playing the sign language animation corresponding to the action sequence. The playing method and system based on the intelligent sign language translation software have the following beneficial effects: (1) voice is rendered as sign language animation in real time, so no human sign language interpreter is needed, which reduces communication cost and improves the efficiency of communication between hearing-impaired people and others; (2) external voice is automatically analyzed, converted into text, and played as sign language animation according to the text, which broadens the range of communication available to hearing-impaired people and greatly facilitates them.

Description

Playing method and system based on intelligent sign language translation software
Technical Field
The invention belongs to the technical field of sign language translation, and particularly relates to a playing method and system based on intelligent sign language translation software.
Background
Sign language conveys meanings or words through gestures and hand movements that imitate images or syllables. It is the hand-based language by which hearing-impaired and non-verbal people communicate and exchange ideas, an important auxiliary to spoken language, and the main communication tool for hearing-impaired people.
At present, communication between hearing people and hearing-impaired people usually relies on a human sign language interpreter: the hearing person speaks to the interpreter, the interpreter signs to the hearing-impaired person, and the interpreter then relays the hearing-impaired person's response in speech. This approach does enable communication between hearing and hearing-impaired people, but at high cost and with low efficiency.
Disclosure of Invention
In order to solve the above problems, the present invention provides a playing method based on intelligent sign language translation software, which comprises the following steps:
acquiring input voice data;
converting the voice data into text data;
analyzing the text data to obtain an action sequence;
and playing the sign language animation corresponding to the action sequence.
Preferably, the acquiring of the input voice data includes the steps of:
presetting a voice data receiving area and voice data receiving time;
judging whether voice data enter the voice data receiving area within the voice data receiving time;
if yes, receiving the voice data;
if not, outputting a first alarm signal.
Preferably, the acquiring the input voice data further comprises the steps of:
presetting voice data receiving intensity;
judging whether the intensity of the voice data reaches the receiving intensity of the voice data;
if yes, receiving the voice data;
if not, outputting a second alarm signal.
Preferably, the acquiring the input voice data further comprises the steps of:
presetting a voice data receiving format;
judging whether the format of the voice data matches the voice data receiving format;
if yes, receiving the voice data;
if not, outputting a third alarm signal.
Preferably, the converting the voice data into text data comprises the steps of:
acquiring the voice data;
performing word segmentation operation on the voice data to obtain voice words;
judging whether the similarity of each voice word and a standard voice word exceeds a preset threshold value or not;
if yes, replacing the voice word with the standard voice word;
if not, replacing the voice words with high-frequency standard voice words with pronunciation similar to the voice words.
Preferably, the parsing the text data to obtain an action sequence comprises the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the text word belongs to a preset part-of-speech text word or not;
if yes, the text data is reserved;
and if not, deleting the text data.
Preferably, after the step of retaining the text data, the method further comprises the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the similarity of each text word and a standard text word exceeds a preset threshold value or not;
if yes, replacing the text word with the standard text word;
and if not, replacing the text words with high-frequency standard text words similar to the text words.
Preferably, the playing of the sign language animation corresponding to the action sequence comprises the steps of:
acquiring each action operation in the action sequence;
acquiring sign language animation actions corresponding to each action operation;
sequencing each sign language animation action according to the relative sequence of each action operation in the action sequence;
and playing each sign language animation action according to the sequence.
Preferably, the step of obtaining the sign language animation action corresponding to each action operation includes the steps of:
judging whether the sign language animation action corresponding to the action operation is preset or not;
if so, calling the sign language animation action;
and if not, creating a sign language animation action corresponding to the action operation.
The invention also provides a playing system based on the intelligent sign language translation software, which comprises:
the voice data acquisition module is used for acquiring input voice data;
the voice data conversion module is used for converting the voice data into text data;
the text data analysis module is used for analyzing the text data to obtain an action sequence;
and the sign language animation playing module is used for playing the sign language animation corresponding to the action sequence.
The playing method and system based on the intelligent sign language translation software have the following beneficial effects:
(1) voice is rendered as sign language animation in real time, so no human sign language interpreter is needed, which reduces communication cost and improves the efficiency of communication between hearing-impaired people and others;
(2) external voice is automatically analyzed, converted into text, and played as sign language animation according to the text, which broadens the range of communication available to hearing-impaired people and greatly facilitates them.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart of the playing method based on intelligent sign language translation software according to the present invention;
FIG. 2 is a schematic diagram of the playing system based on intelligent sign language translation software according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Referring to FIG. 1, in the embodiment of the present application, the invention provides a playing method based on intelligent sign language translation software, the method comprising the steps of:
s1: acquiring input voice data;
s2: converting the voice data into text data;
s3: analyzing the text data to obtain an action sequence;
s4: and playing the sign language animation corresponding to the action sequence.
In the embodiment of the application, externally input voice data is first acquired and converted into text data; the text data is then parsed to obtain an action sequence, and finally the sign language animation corresponding to the action sequence is played.
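The four steps can be sketched as a minimal pipeline. The helper functions below are illustrative stand-ins (the patent does not specify an implementation), and the "animation" strings are placeholders for actual animation playback:

```python
def acquire_voice_data(source):
    """S1: acquire input voice data (a pre-recognized string stands in for audio)."""
    return source

def voice_to_text(voice_data):
    """S2: convert voice data into text data (stub: recognition assumed already done)."""
    return voice_data

def parse_action_sequence(text_data):
    """S3: parse the text into an ordered action sequence (naive whitespace split)."""
    return text_data.split()

def play_sign_animations(actions):
    """S4: 'play' the sign language animation for each action, in order."""
    return ["animation:" + a for a in actions]

voice = acquire_voice_data("please sit")
text = voice_to_text(voice)
actions = parse_action_sequence(text)
print(play_sign_animations(actions))  # ['animation:please', 'animation:sit']
```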
The present application is described in detail below with specific examples.
For example, hearing person A says "please sit" to hearing-impaired person B. Person B obtains the voice data through a mobile terminal, which converts the voice data "please sit" into the corresponding text data "please sit", parses that text data to obtain the action sequence "please", "sit", and then plays the sign language animations corresponding to the action sequence "please", "sit".
In the embodiment of the present application, the acquiring of the input voice data in step S1 includes the steps of:
presetting a voice data receiving area and voice data receiving time;
judging whether voice data enter the voice data receiving area within the voice data receiving time;
if yes, receiving the voice data;
if not, outputting a first alarm signal.
In the embodiment of the present application, when a mobile terminal is used to obtain input voice data, a voice data receiving area and a voice data receiving time must be preset for it. The voice data receiving area is the region within which the mobile terminal can pick up voice data; for example, the terminal may be able to pick up voice data inside a sphere of 10 cm around itself, while voice data from beyond that region cannot be received because the signal is too weak. The voice data receiving time is how long the terminal keeps listening; for example, if no voice data enters the receiving area within 10 s after the receive button is pressed, the terminal automatically stops receiving, whereas if voice data does enter the receiving area within those 10 s, the terminal keeps the receiving function open. Thus, voice data that enters the receiving area within the receiving time is received; if none does, a first alarm signal is output, such as "stopped receiving voice data" or "voice data receiving failed", which can be played aloud or displayed as text on the terminal's screen.
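A minimal sketch of this gating step, using the 10 cm radius and 10 s window from the example above as assumed preset values (the alarm text is likewise illustrative):

```python
RECEIVE_RADIUS_CM = 10.0   # assumed sphere within which voice can be picked up
RECEIVE_WINDOW_S = 10.0    # assumed window after the receive button is pressed

def try_receive(distance_cm, elapsed_s):
    """Accept voice data only inside the receiving area and within the receiving time."""
    if elapsed_s <= RECEIVE_WINDOW_S and distance_cm <= RECEIVE_RADIUS_CM:
        return "received"
    return "alarm 1: voice data receiving failed"

print(try_receive(5.0, 3.0))    # in range, in time
print(try_receive(5.0, 12.0))   # too late: first alarm signal
```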
In the embodiment of the present application, the acquiring of the input voice data in step S1 further includes the steps of:
presetting voice data receiving intensity;
judging whether the intensity of the voice data reaches the receiving intensity of the voice data;
if yes, receiving the voice data;
if not, outputting a second alarm signal.
In the embodiment of the present application, when the mobile terminal is used to obtain input voice data, a voice data receiving intensity must also be preset. The receiving intensity is the minimum loudness the voice data itself must have for the terminal to receive it successfully; for example, the terminal may pick up voice data of 60 dB or more within the 10 cm sphere around itself, while voice data below 60 dB in that region is too weak to be received. Voice data entering the receiving area at or above the receiving intensity is received successfully; otherwise a second alarm signal is output, such as "voice data intensity too low" or "voice data receiving failed", which can be played aloud or displayed as text on the terminal's screen.
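The intensity check follows the same pattern; the 60 dB floor below is the assumed preset from the example above:

```python
MIN_RECEIVE_DB = 60.0  # assumed minimum loudness, from the 60 dB example

def check_intensity(level_db):
    """Accept voice data only at or above the preset receiving intensity."""
    if level_db >= MIN_RECEIVE_DB:
        return "received"
    return "alarm 2: voice data intensity too low"

print(check_intensity(65.0))  # loud enough
print(check_intensity(50.0))  # too quiet: second alarm signal
```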
In the embodiment of the present application, the acquiring of the input voice data in step S1 further includes the steps of:
presetting a voice data receiving format;
judging whether the format of the voice data matches the voice data receiving format;
if yes, receiving the voice data;
if not, outputting a third alarm signal.
In the embodiment of the present application, when the mobile terminal is used to obtain input voice data, a voice data receiving format must also be preset. The receiving format is the format the voice data itself must be in for the terminal to receive it successfully; for example, the terminal may accept voice data spoken in Mandarin, while voice data spoken in a dialect cannot be received. Once voice data entering the receiving area reaches the receiving intensity, the terminal judges whether its format matches the receiving format. If it does, the voice data is received successfully; if not, a third alarm signal is output, such as "voice data format cannot be identified" or "voice data receiving failed", which can be played aloud or displayed as text on the terminal's screen.
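The format check can be sketched the same way; treating "mandarin" as the only accepted format is an assumption taken from the example above:

```python
ACCEPTED_FORMATS = {"mandarin"}  # assumed: only Mandarin-format voice data is accepted

def check_format(fmt):
    """Accept voice data only when its format matches a preset receiving format."""
    if fmt.lower() in ACCEPTED_FORMATS:
        return "received"
    return "alarm 3: voice data format cannot be identified"

print(check_format("Mandarin"))  # accepted
print(check_format("dialect"))   # rejected: third alarm signal
```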
In the embodiment of the present application, the converting the voice data into text data in step S2 includes the steps of:
acquiring the voice data;
performing word segmentation operation on the voice data to obtain voice words;
judging whether the similarity of each voice word and a standard voice word exceeds a preset threshold value or not;
if yes, replacing the voice word with the standard voice word;
if not, replacing the voice words with high-frequency standard voice words with pronunciation similar to the voice words.
In the embodiment of the application, after the voice data is obtained, word segmentation is performed on it to obtain voice words; that is, a sentence of voice data is segmented into several words or phrases. Each word or phrase is then compared with the standard voice words, and it is judged whether the similarity between the two exceeds a preset threshold. If it does, the voice word is replaced with that standard voice word; if not, the voice word is replaced with the most frequently used standard voice word whose pronunciation is similar to it. For example, suppose the spoken word matches the standard voice word "sit" with 100% pronunciation similarity and the standard voice word "catch" with only 20% similarity, and the preset threshold is 90%: the standard voice word "sit" is used, so the text data corresponding to the voice data is "sit". If instead no standard voice word exceeds the threshold (say the remaining candidates are "catch" and "go"), the usage frequency of the candidates is considered; since "catch" is used more frequently than "go", the standard voice word "catch" is chosen as the replacement.
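This replacement rule, similarity threshold first, then frequency fallback, can be sketched as follows. The similarity scores and frequency counts are invented for illustration; a real system would compute them from pronunciation models and corpus statistics:

```python
def choose_standard(similarity, frequency, threshold=0.9):
    """Pick the standard word that replaces a segmented voice word.

    `similarity` maps candidate standard words to their pronunciation
    similarity with the spoken word; `frequency` maps them to usage frequency.
    """
    best = max(similarity, key=similarity.get)
    if similarity[best] >= threshold:
        return best  # similar enough: replace with the most similar standard word
    # below threshold: fall back to the most frequently used similar-sounding word
    return max(similarity, key=lambda w: frequency.get(w, 0))

print(choose_standard({"sit": 1.0, "catch": 0.2}, {"sit": 500, "catch": 80}))  # sit
```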
In this embodiment of the present application, the parsing the text data to obtain the action sequence in step S3 includes the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the text word belongs to a preset part-of-speech text word or not;
if yes, the text data is reserved;
and if not, deleting the text data.
In the embodiment of the application, after the text data is obtained, word segmentation is performed on it to obtain text words; that is, a sentence of text data is segmented into several words or phrases, and it is judged whether each word belongs to a preset part of speech. Words of the preset part of speech are retained; the others are deleted. For example, if the preset part of speech is the verb, then "please" is deleted because it is not a verb, while "sit" is retained because it is. This operation removes unnecessary words, or words without concrete meaning, and simplifies the text.
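A minimal sketch of this filtering step; the tiny hand-made part-of-speech table stands in for a real tagger, and retaining only verbs is the assumption taken from the example above:

```python
POS_TABLE = {"please": "adverb", "sit": "verb", "stand": "verb"}  # illustrative tags

def filter_by_pos(words, keep=frozenset({"verb"})):
    """Keep only words whose part of speech is in the preset set."""
    return [w for w in words if POS_TABLE.get(w) in keep]

print(filter_by_pos(["please", "sit"]))  # ['sit']
```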
In this embodiment of the present application, after the step of retaining the text data, the method further includes the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the similarity of each text word and a standard text word exceeds a preset threshold value or not;
if yes, replacing the text word with the standard text word;
and if not, replacing the text words with high-frequency standard text words similar to the text words.
In the embodiment of the application, after the text data is obtained, word segmentation is performed on it to obtain text words; that is, a sentence of text data is segmented into several words or phrases. Each word or phrase is then compared with the standard text words, and it is judged whether the similarity between the two exceeds a preset threshold. If it does, the text word is replaced with that standard text word; if not, the text word is replaced with the most frequently used standard text word similar to it. For example, the text word "spoon" has multiple meanings: it can denote a piece of tableware or, colloquially, a fool. Suppose that in meaning the similarity between "spoon" and the standard text word "fool" is 90% and the preset threshold is 80%: the standard text word "fool" is then used to replace "spoon". If the standard vocabulary does not contain "fool" but contains several near-synonyms, the usage frequency of those standard text words is considered, and the most frequently used one replaces the text word "spoon".
In this embodiment of the present application, the playing of the sign language animation corresponding to the action sequence in step S4 includes the steps of:
acquiring each action operation in the action sequence;
acquiring sign language animation actions corresponding to each action operation;
sequencing each sign language animation action according to the relative sequence of each action operation in the action sequence;
and playing each sign language animation action according to the sequence.
In the embodiment of the application, after each action operation in the action sequence is obtained, the sign language animation action corresponding to each action operation is obtained; the sign language animation actions are sequenced according to the relative order of the action operations in the action sequence, and finally played in that order. For example, if the action operations in the action sequence are "sit" followed by "stand", the corresponding sign language animation actions are, in order, "sit" and "stand", and the two animations are played in that chronological order.
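Sketching step S4: each action is looked up in an animation library and the clips are returned in the action sequence's own order. The library contents and clip names are hypothetical:

```python
ANIMATION_LIBRARY = {"sit": "sit_clip", "stand": "stand_clip"}  # illustrative clips

def play_sequence(action_sequence):
    """Return the ordered list of animation clips that would be played."""
    return [ANIMATION_LIBRARY[a] for a in action_sequence if a in ANIMATION_LIBRARY]

print(play_sequence(["sit", "stand"]))  # ['sit_clip', 'stand_clip']
```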
In this embodiment of the present application, the obtaining of the sign language animation action corresponding to each of the action operations includes:
judging whether the sign language animation action corresponding to the action operation is preset or not;
if so, calling the sign language animation action;
and if not, creating a sign language animation action corresponding to the action operation.
In the embodiment of the application, before a sign language animation action is used, it must be judged whether the animation corresponding to the action operation is already stored on the mobile terminal. If it is, the animation can be called directly; if not, a sign language animation action corresponding to the action operation must be created.
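The call-or-create rule can be sketched as a cached lookup; how an animation is actually generated is not specified by the patent, so a placeholder string stands in for creation:

```python
preset_animations = {"sit": "sit_clip"}  # animations assumed stored on the terminal

def get_animation(action):
    """Call the preset animation if it exists; otherwise create and cache one."""
    if action in preset_animations:
        return preset_animations[action]
    created = "generated_clip:" + action  # placeholder for real animation creation
    preset_animations[action] = created   # cache so the next call reuses it
    return created

print(get_animation("sit"))   # preset animation is called
print(get_animation("wave"))  # no preset exists, so one is created
```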
As shown in FIG. 2, in the embodiment of the present application, the invention further provides a playing system based on the intelligent sign language translation software, where the system includes:
a voice data acquisition module 10, configured to acquire input voice data;
a voice data conversion module 20, configured to convert the voice data into text data;
a text data parsing module 30, configured to parse the text data to obtain an action sequence;
and the sign language animation playing module 40 is used for playing the sign language animation corresponding to the action sequence.
In the embodiment of the present application, the invention provides a playing system based on the intelligent sign language translation software, which plays sign language animation using the playing method described above.
The playing method and system based on the intelligent sign language translation software have the following beneficial effects:
(1) voice is rendered as sign language animation in real time, so no human sign language interpreter is needed, which reduces communication cost and improves the efficiency of communication between hearing-impaired people and others;
(2) external voice is automatically analyzed, converted into text, and played as sign language animation according to the text, which broadens the range of communication available to hearing-impaired people and greatly facilitates them.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (10)

1. A playing method based on intelligent sign language translation software, characterized by comprising the steps of:
acquiring input voice data;
converting the voice data into text data;
analyzing the text data to obtain an action sequence;
and playing the sign language animation corresponding to the action sequence.
2. The playing method based on intelligent sign language translation software according to claim 1, wherein the acquiring of the input voice data comprises the steps of:
presetting a voice data receiving area and voice data receiving time;
judging whether voice data enter the voice data receiving area within the voice data receiving time;
if yes, receiving the voice data;
if not, outputting a first alarm signal.
3. The playing method based on intelligent sign language translation software according to claim 2, wherein the acquiring of the input voice data further comprises the steps of:
presetting voice data receiving intensity;
judging whether the intensity of the voice data reaches the receiving intensity of the voice data;
if yes, receiving the voice data;
if not, outputting a second alarm signal.
4. The playing method based on intelligent sign language translation software according to claim 3, wherein the acquiring of the input voice data further comprises the steps of:
presetting a voice data receiving format;
judging whether the format of the voice data matches the voice data receiving format;
if yes, receiving the voice data;
if not, outputting a third alarm signal.
5. The playing method based on intelligent sign language translation software according to claim 1, wherein the converting of the voice data into text data comprises the steps of:
acquiring the voice data;
performing word segmentation operation on the voice data to obtain voice words;
judging whether the similarity of each voice word and a standard voice word exceeds a preset threshold value or not;
if yes, replacing the voice word with the standard voice word;
if not, replacing the voice words with high-frequency standard voice words with pronunciation similar to the voice words.
6. The playing method based on intelligent sign language translation software according to claim 1, wherein the parsing of the text data to obtain an action sequence comprises the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the text word belongs to a preset part-of-speech text word or not;
if yes, the text data is reserved;
and if not, deleting the text data.
7. The playing method based on intelligent sign language translation software according to claim 6, further comprising, after the step of retaining the text data, the steps of:
acquiring the text data;
performing word segmentation operation on the text data to obtain text words;
judging whether the similarity of each text word and a standard text word exceeds a preset threshold value or not;
if yes, replacing the text word with the standard text word;
and if not, replacing the text words with high-frequency standard text words similar to the text words.
8. The playing method based on intelligent sign language translation software according to claim 1, wherein the playing of the sign language animation corresponding to the action sequence comprises the steps of:
acquiring each action operation in the action sequence;
acquiring sign language animation actions corresponding to each action operation;
sequencing each sign language animation action according to the relative sequence of each action operation in the action sequence;
and playing each sign language animation action according to the sequence.
9. The playing method based on intelligent sign language translation software according to claim 8, wherein the acquiring of the sign language animation action corresponding to each action operation comprises the steps of:
judging whether the sign language animation action corresponding to the action operation is preset or not;
if so, calling the sign language animation action;
and if not, creating a sign language animation action corresponding to the action operation.
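Claims 8 and 9 together describe a lookup-or-create step followed by ordered playback. A minimal sketch, with a dictionary standing in for the preset animation library and a placeholder string standing in for real animation synthesis (both hypothetical):

```python
# Preset library: action operation -> animation clip (e.g. keyframe data).
PRESET_ANIMATIONS = {}

def get_animation(action):
    """Return the preset sign language animation for an action, creating
    a new one if none is preset (sketch of claim 9)."""
    if action in PRESET_ANIMATIONS:
        return PRESET_ANIMATIONS[action]
    clip = f"generated:{action}"  # stand-in for real animation creation
    PRESET_ANIMATIONS[action] = clip
    return clip

def play_sequence(action_sequence, play=print):
    """Play the animations in the actions' relative order (sketch of claim 8)."""
    for action in action_sequence:
        play(get_animation(action))
```

Actions with a preset clip are played directly; unseen actions fall through to the creation branch and are cached for reuse.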
10. A playing system based on intelligent sign language translation software, characterized in that the system comprises:
the voice data acquisition module is used for acquiring input voice data;
the voice data conversion module is used for converting the voice data into text data;
the text data analysis module is used for analyzing the text data to obtain an action sequence;
and the sign language animation playing module is used for playing the sign language animation corresponding to the action sequence.
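The four modules of claim 10 form a pipeline: voice acquisition, voice-to-text conversion, text parsing to an action sequence, and animation playback. A skeleton of that structure, with every stage stubbed out (the real recognition, parsing, and rendering logic is not part of this sketch):

```python
class SignLanguagePlayer:
    """Pipeline skeleton mirroring the four modules of claim 10.
    Every stage is a stub standing in for the real module."""

    def acquire_voice(self, source):
        # Voice data acquisition module (stub: passes the input through).
        return source

    def voice_to_text(self, voice_data):
        # Voice-to-text conversion module (stub: assumes text was given).
        return voice_data

    def parse_actions(self, text_data):
        # Text parsing module: text data -> action sequence (stub).
        return text_data.split()

    def play(self, action_sequence):
        # Sign language animation playing module (stub: names the clips).
        return [f"anim:{action}" for action in action_sequence]

    def run(self, source):
        return self.play(
            self.parse_actions(self.voice_to_text(self.acquire_voice(source))))
```

Each stub corresponds to one claimed module, so a concrete implementation would replace the stage bodies without changing the pipeline shape.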
CN202010814396.9A 2020-08-13 2020-08-13 Playing method and system based on intelligent sign language translation software Pending CN113706977A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010814396.9A CN113706977A (en) 2020-08-13 2020-08-13 Playing method and system based on intelligent sign language translation software

Publications (1)

Publication Number Publication Date
CN113706977A true CN113706977A (en) 2021-11-26

Family

ID=78646590

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010814396.9A Pending CN113706977A (en) 2020-08-13 2020-08-13 Playing method and system based on intelligent sign language translation software

Country Status (1)

Country Link
CN (1) CN113706977A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103956167A (en) * 2014-05-06 2014-07-30 北京邮电大学 Visual sign language interpretation method and device based on Web
WO2017121316A1 (en) * 2016-01-11 2017-07-20 陈勇 Speech converter
CN107066455A (en) * 2017-03-30 2017-08-18 唐亮 A kind of multilingual intelligence pretreatment real-time statistics machine translation system
CN109446533A (en) * 2018-09-17 2019-03-08 深圳市沃特沃德股份有限公司 The interactive mode and its device that bluetooth translator, bluetooth are translated
CN109920432A (en) * 2019-03-05 2019-06-21 百度在线网络技术(北京)有限公司 A kind of audio recognition method, device, equipment and storage medium
CN110070065A (en) * 2019-04-30 2019-07-30 李冠津 The sign language systems and the means of communication of view-based access control model and speech-sound intelligent
CN110442853A (en) * 2019-08-09 2019-11-12 深圳前海微众银行股份有限公司 Text positioning method, device, terminal and storage medium
CN110675870A (en) * 2019-08-30 2020-01-10 深圳绿米联创科技有限公司 Voice recognition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11688416B2 (en) Method and system for speech emotion recognition
KR20140105673A (en) Supporting Method And System For communication Service, and Electronic Device supporting the same
US20080059200A1 (en) Multi-Lingual Telephonic Service
CN110853615B (en) Data processing method, device and storage medium
US20100217591A1 Vowel recognition system and method in speech to text applications
CA2416592A1 (en) Method and device for providing speech-to-text encoding and telephony service
CN111986675B (en) Voice dialogue method, device and computer readable storage medium
US20020198716A1 (en) System and method of improved communication
JP2012181358A (en) Text display time determination device, text display system, method, and program
WO2006070373A2 (en) A system and a method for representing unrecognized words in speech to text conversions as syllables
CN113674746A (en) Man-machine interaction method, device, equipment and storage medium
KR20160081244A (en) Automatic interpretation system and method
CN117371459A (en) Conference auxiliary system and method based on intelligent voice AI real-time translation
CN113160821A (en) Control method and device based on voice recognition
US20170221481A1 (en) Data structure, interactive voice response device, and electronic device
JPH0965424A (en) Automatic translation system using wireless mobile terminal
JP2004015478A (en) Speech communication terminal device
CN114168104A (en) A scene text interactive understanding system for visually impaired people
CN111833865B (en) Man-machine interaction method, terminal and computer readable storage medium
CN114239610A (en) Multi-language speech recognition and translation method and related system
CN113077790B (en) Multi-language configuration method, multi-language interaction method, device and electronic equipment
CN113706977A (en) Playing method and system based on intelligent sign language translation software
KR102299571B1 (en) System and Method for Providing Simultaneous Interpretation Service for Disabled Person
CN117149965A (en) Dialogue processing method, dialogue processing device, computer equipment and computer readable storage medium
KR101233655B1 (en) Apparatus and method of interpreting an international conference based speech recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211222

Address after: 215000 No. 8, Zhujiawan street, Suzhou, Jiangsu (B5 and B6, 2nd floor, building 5)

Applicant after: Sure enough, barrier free technology (Suzhou) Co.,Ltd.

Address before: Room 307-2, floor 3, building 4, No. 209, Zhuyuan Road, high tech Zone, Suzhou, Jiangsu 215011

Applicant before: Suzhou Yunguo Xinxin film and Television Technology Co.,Ltd.
