CN106896914A - Information conversion method and device
- Publication number: CN106896914A (application CN201710040374.XA)
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/532—Query formulation, e.g. graphical querying
Abstract
The invention discloses an information conversion method and device. The method comprises the following steps: acquiring input information of a user; recognizing the input information to obtain first information, wherein the input information expresses the first information; searching a preset database for output information according to the first information, wherein the output information also expresses the first information but in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information; and presenting the output information to the user. The invention solves the technical problem that normal people and deaf-mutes cannot communicate conveniently.
Description
Technical field
The present invention relates to the field of information processing, and in particular to an information conversion method and device.
Background technology
Communication between normal people and deaf-mutes has always been inconvenient: an ordinary normal person cannot understand the sign language of a deaf-mute, and a deaf-mute cannot hear the sounds a normal person makes. Existing mobile phones are likewise designed for normal people, so a deaf-mute cannot communicate with a normal person by mobile phone.
No effective solution to the above problem has yet been proposed.
Summary of the invention
Embodiments of the present invention provide an information conversion method and device, so as to at least solve the technical problem that communication between normal people and deaf-mutes is inconvenient.
According to one aspect of the embodiments of the present invention, an information conversion method is provided, comprising: acquiring input information of a user; recognizing the input information to obtain first information, wherein the input information expresses the first information; searching a preset database for output information according to the first information, wherein the output information also expresses the first information, the output information expresses the first information in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information; and presenting the output information to the user.
Further, the input information is gesture information, and recognizing the input information to obtain the first information comprises: extracting an action vector from the gesture information; comparing the action vector with preset action vectors stored in the preset database to obtain a preset action vector matching the action vector, wherein each preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture represents gesture information; acquiring the dynamic picture associated with the matched preset action vector; and taking the matched dynamic picture as the first information.
Further, the output information is text information, and searching the database for the output information according to the first information comprises: searching the preset database, according to the matched dynamic picture, for the text information associated with that dynamic picture.
Further, the output information is text information, and presenting the output information to the user comprises: displaying the text information to the user; or generating voice information from the text information and playing the voice information to the user.
Further, the input information comprises voice information and text information, and before the input information is recognized to obtain the first information, the method comprises: detecting whether the input information is voice information or text information; and, when the input information is voice information, converting the voice information into text information.
Further, recognizing the input information to obtain the first information comprises: extracting a keyword from the text information; comparing the keyword with preset keywords stored in the preset database to obtain a preset keyword matching the keyword, wherein each preset keyword is associated with standard text information stored in the preset database, and the standard text information represents the text information; acquiring the standard text information associated with the matched preset keyword; and taking the standard text information associated with the matched preset keyword as the first information.
Further, the output information is a dynamic picture, and presenting the output information to the user comprises: displaying the dynamic picture to the user.
According to another aspect of the embodiments of the present invention, an information conversion device is also provided, comprising: an acquiring unit, configured to acquire input information of a user; a recognition unit, configured to recognize the input information to obtain first information, wherein the input information expresses the first information; a searching unit, configured to search a preset database for output information according to the first information, wherein the output information also expresses the first information, the output information expresses the first information in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information; and a display unit, configured to present the output information to the user.
Further, the input information is gesture information, and the recognition unit comprises: a first extraction module, configured to extract an action vector from the gesture information; a first comparison module, configured to compare the action vector with preset action vectors stored in the preset database to obtain a preset action vector matching the action vector, wherein each preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture represents gesture information; a first acquisition module, configured to acquire the dynamic picture associated with the matched preset action vector; and a first information module, configured to take the matched dynamic picture as the first information.
Further, the output information is text information, and the searching unit comprises: a searching module, configured to search the preset database, according to the dynamic picture, for the text information associated with the dynamic picture.
Further, the output information is text information, and the display unit comprises: a first display module, configured to display the text information to the user; and a generation module, configured to generate voice information from the text information and play the voice information to the user.
Further, the input information comprises voice information and text information, and the device comprises: a detection unit, configured to detect, before the input information is recognized to obtain the first information, whether the input information is voice information or text information; and a converting unit, configured to convert the voice information into text information when the input information is voice information.
Further, the recognition unit comprises: a second extraction module, configured to extract a keyword from the text information; a second comparison module, configured to compare the keyword with preset keywords stored in the preset database to obtain a preset keyword matching the keyword, wherein each preset keyword is associated with standard text information stored in the preset database, and the standard text information represents the text information; a second acquisition module, configured to acquire the standard text information associated with the matched preset keyword; and a second information module, configured to take the standard text information associated with the matched preset keyword as the first information.
Further, the output information is a dynamic picture, and the display unit comprises: a second display module, configured to display the dynamic picture to the user.
In the embodiments of the present invention, input information of a user is acquired; the input information is recognized to obtain first information, wherein the input information expresses the first information; output information is searched for in a preset database according to the first information, wherein the output information also expresses the first information, the output information expresses the first information in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information; and the output information is presented to the user. In this way, a normal person's text information can be recognized and the corresponding gesture information looked up in the preset database, or a deaf-mute's gesture information can be recognized and the corresponding text information looked up in the preset database. This achieves the purpose of mutual conversion between text information and gesture information, realizes the technical effect that a normal person can communicate smoothly with a deaf-mute, and thereby solves the technical problem that communication between normal people and deaf-mutes is inconvenient.
Brief description of the drawings
The accompanying drawings described herein are provided for a further understanding of the present invention and constitute a part of this application. The schematic embodiments of the present invention and their description serve to explain the present invention and do not unduly limit it. In the drawings:
Fig. 1 is a flow chart of an optional information conversion method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of an optional information conversion device according to an embodiment of the present invention.
Specific embodiment
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and the like in the description, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present invention described herein can be implemented in an order other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device that contains a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
According to an embodiment of the present invention, a method embodiment of an information conversion method is provided. It should be noted that the steps illustrated in the flow chart of the accompanying drawing may be performed in a computer system, such as a set of computer-executable instructions, and although a logical order is shown in the flow chart, in some cases the steps shown or described may be performed in an order different from that shown here.
Fig. 1 is a flow chart of an optional information conversion method according to an embodiment of the present invention. As shown in Fig. 1, the method comprises the following steps:
Step S102: acquire the input information of the user.
In the embodiments of the present invention, the input information of the user may be the voice information or text information of a normal person, or the gesture information of a deaf-mute. Voice information, text information, and gesture information are all ways in which a user expresses himself, and the gesture information of a deaf-mute can be regarded as a special language. The embodiments of the present invention do not limit the language category of the voice information or the text information; that is, the input information may be voice information, text information, or gesture information, and the voice information and text information may be in various languages such as Chinese or English.
Step S104: recognize the input information to obtain first information, wherein the input information expresses the first information.
In the embodiments of the present invention, after the input information of the user is acquired, the input information is recognized to obtain the first information expressed by the input information, wherein the first information can be regarded as another expression of the input information, or as a certain feature of the input information. The first information is preset standard information; by recognizing the input information, the input information is converted into this preset standard information.
Step S106: search a preset database for output information according to the first information, wherein the output information expresses the first information, the output information expresses the first information in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information.
In the embodiments of the present invention, the first information has an association with the output information, and the first information, the output information, and the association between them are stored in the preset database. After the first information is obtained, the first information is looked up in the preset database, and the output information corresponding to the first information is found according to the association.
Step S108: present the output information to the user.
In the embodiments of the present invention, when the input information is the voice information or text information of a normal person, the output information is gesture information; when the input information is the gesture information of a deaf-mute, the output information is voice information or text information.
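Steps S102 to S108 can be sketched end to end as follows. The database contents, function names, and file names below are illustrative assumptions only; the patent describes the flow but does not specify an implementation.

```python
# Sketch of steps S102-S108 for the text -> gesture direction, assuming the
# preset database is a simple mapping from standard (first) information to
# output information. All names and entries here are invented for illustration.

PRESET_DB = {
    # first information (standard text) -> output information (dynamic picture)
    "hello": "hello.gif",
    "thank you": "thank_you.gif",
}

def recognize(input_info: str) -> str:
    """Step S104: reduce the raw input to preset standard information."""
    return input_info.strip().lower()

def search_output(first_info: str):
    """Step S106: look up the output information associated with the first information."""
    return PRESET_DB.get(first_info)

def convert(input_info: str) -> str:
    first_info = recognize(input_info)        # S104
    output_info = search_output(first_info)   # S106
    if output_info is None:
        return "(no association found)"
    return output_info                        # S108: present to the user

print(convert("Hello"))  # hello.gif
```

In practice the lookup would run against a real database and the presentation step would render the dynamic picture on the terminal screen, but the control flow is the same.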
In the embodiments of the present invention, by acquiring the input information of the user, recognizing it to obtain the first information it expresses, searching the preset database for the output information that expresses the first information in a different manner, and presenting the output information to the user, a normal person's text information can be recognized and the corresponding gesture information looked up in the preset database, or a deaf-mute's gesture information can be recognized and the corresponding text information looked up in the preset database. This achieves mutual conversion between text information and gesture information, realizes the technical effect that a normal person can communicate smoothly with a deaf-mute, and thereby solves the technical problem that communication between normal people and deaf-mutes is inconvenient.
Optionally, the input information is gesture information, and recognizing the input information to obtain the first information comprises: extracting an action vector from the gesture information; comparing the action vector with preset action vectors stored in the preset database to obtain a preset action vector matching the action vector, wherein each preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture represents gesture information; acquiring the dynamic picture associated with the matched preset action vector; and taking the matched dynamic picture as the first information.
As an optional implementation of the embodiments of the present invention, when a deaf-mute's gesture information needs to be converted, the terminal device scans the gesture, captures both hands and their motion tracks, recognizes the actions of both hands, and extracts action vector data from the actions. Dynamic pictures are stored in the preset database, each dynamic picture representing one piece of gesture information; the action vector of that gesture information is stored in the preset database as a preset action vector and is associated with the dynamic picture. The extracted action vector data are compared with the preset action vectors to obtain the preset action vector matching the extracted action vector, after which the dynamic picture associated with that preset action vector can be obtained.
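The patent does not fix a matching rule. One common way to compare an extracted action vector against stored preset action vectors is nearest-neighbour matching under Euclidean distance, sketched here with invented vectors and picture names:

```python
import math

# Hypothetical preset action vectors and their associated dynamic pictures.
# The patent only states that both are stored and associated in the database;
# the values below are invented for illustration.
PRESET_ACTIONS = {
    "wave.gif":  [0.9, 0.1, 0.0],
    "point.gif": [0.0, 0.8, 0.2],
}

def match_action(action_vector, presets=PRESET_ACTIONS, threshold=0.5):
    """Return the dynamic picture whose preset action vector is nearest to the
    extracted action vector, or None if no preset is within the threshold."""
    best_pic, best_dist = None, float("inf")
    for pic, preset in presets.items():
        dist = math.dist(action_vector, preset)  # Euclidean distance
        if dist < best_dist:
            best_pic, best_dist = pic, dist
    return best_pic if best_dist <= threshold else None

print(match_action([0.85, 0.15, 0.05]))  # wave.gif
```

A distance threshold (here 0.5, an arbitrary choice) keeps an unrelated movement from being forced onto the closest stored gesture.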
Optionally, the output information is text information, and searching the database for the output information according to the first information comprises: searching the preset database, according to the matched dynamic picture, for the text information associated with that dynamic picture.
When the input information is gesture information, the output information can be text information. After the dynamic picture is obtained from the gesture information, the text information associated with the dynamic picture is looked up in the preset database; this text information describes the meaning of the gesture represented by the dynamic picture.
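The association between dynamic pictures and text information can be sketched as a simple two-way table. The picture names and captions below are invented for illustration; the patent does not specify a database schema:

```python
# Hypothetical association table in the preset database:
# dynamic picture -> the text information describing the gesture it shows.
GIF_TO_TEXT = {
    "hello.gif": "Hello",
    "thanks.gif": "Thank you",
}

# The same associations, inverted, serve the reverse direction (text -> gesture),
# which is what lets one preset database support both conversion paths.
TEXT_TO_GIF = {text: gif for gif, text in GIF_TO_TEXT.items()}

def lookup_text(dynamic_picture: str):
    """Find the text information associated with a matched dynamic picture."""
    return GIF_TO_TEXT.get(dynamic_picture)

print(lookup_text("hello.gif"))  # Hello
```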
Optionally, the output information is text information, and presenting the output information to the user comprises: displaying the text information to the user; or generating voice information from the text information and playing the voice information to the user.
As an optional implementation of the embodiments of the present invention, when the output information is text information, the text information may be displayed on the screen of the terminal device, or the terminal device may convert the text information into voice information and play the voice information to the user.
Optionally, the input information comprises voice information and text information, and before the input information is recognized to obtain the first information, the method comprises: detecting whether the input information is voice information or text information; and, when the input information is voice information, converting the voice information into text information.
As an optional implementation of the embodiments of the present invention, the voice information or text information of a normal person can be converted into gesture information that a deaf-mute can recognize. Optionally, when the input information is voice information, the voice information is first converted into text information so that the text information can be recognized.
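The detect-then-convert step could look like the following sketch. The type check and the `speech_to_text` stub are assumptions: the patent does not say how voice is distinguished from text, nor which speech-recognition engine performs the conversion.

```python
def is_voice(input_info) -> bool:
    """Crude type detection: audio arrives as raw bytes, text as a string."""
    return isinstance(input_info, (bytes, bytearray))

def speech_to_text(audio: bytes) -> str:
    """Placeholder for whatever speech-recognition engine the terminal provides."""
    raise NotImplementedError("plug in a real speech recognizer here")

def normalize_input(input_info):
    """Detect voice vs. text and convert voice to text before recognition."""
    if is_voice(input_info):
        return speech_to_text(input_info)
    return input_info

print(normalize_input("thank you"))  # thank you
```

After this step, both input channels reach the keyword-recognition stage as plain text.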
Optionally, recognizing the input information to obtain the first information comprises: extracting a keyword from the text information; comparing the keyword with preset keywords stored in the preset database to obtain a preset keyword matching the keyword, wherein each preset keyword is associated with standard text information stored in the preset database, and the standard text information represents the text information; acquiring the standard text information associated with the matched preset keyword; and taking the standard text information associated with the matched preset keyword as the first information.
As an optional implementation of the embodiments of the present invention, when the text information is recognized, a keyword is extracted from the text information. Standard text information is stored in the preset database, and the keywords of the standard text information are stored in the preset database as preset keywords, each preset keyword being associated with its standard text information. The keyword extracted from the text information is compared with the preset keywords in the preset database to obtain the preset keyword matching the extracted keyword, after which the standard text information associated with that preset keyword is obtained.
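The keyword matching described above can be realized, at its simplest, by scanning the input text for any stored preset keyword. The keywords and standard text entries below are invented; the patent does not specify an extraction method, so word splitting stands in for it here:

```python
# Hypothetical preset keywords, each associated with a standard text entry.
KEYWORD_TO_STANDARD = {
    "hungry": "I am hungry",
    "bathroom": "Where is the bathroom?",
}

def extract_keywords(text: str) -> list:
    """Naive keyword extraction: the lowercased words of the input text."""
    return text.lower().split()

def match_standard_text(text: str):
    """Return the first standard text whose preset keyword appears in the input,
    or None when no preset keyword matches."""
    for word in extract_keywords(text):
        if word in KEYWORD_TO_STANDARD:
            return KEYWORD_TO_STANDARD[word]
    return None

print(match_standard_text("I'm so hungry right now"))  # I am hungry
```

The matched standard text is the first information; the gesture (dynamic picture) associated with it is then retrieved and displayed.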
Optionally, the output information is a dynamic picture, and presenting the output information to the user comprises: displaying the dynamic picture to the user.
When the input information is voice information or text information, the output information can be dynamic picture information. The preset database also stores dynamic pictures associated with the standard text information; each such dynamic picture represents a gesture with the same meaning as its standard text information. After the dynamic picture is obtained from the standard text information, the dynamic picture is displayed to the user.
According to an embodiment of the present invention, a device embodiment of an information conversion device is provided. Fig. 2 is a schematic diagram of an optional conversion device according to an embodiment of the present invention. As shown in Fig. 2, the device comprises:
Acquiring unit 210, configured to acquire the input information of the user.
In the embodiments of the present invention, the input information of the user may be the voice information or text information of a normal person, or the gesture information of a deaf-mute. Voice information, text information, and gesture information are all ways in which a user expresses himself, and the gesture information of a deaf-mute can be regarded as a special language. The embodiments of the present invention do not limit the language category of the voice information or the text information; that is, the input information may be voice information, text information, or gesture information, and the voice information and text information may be in various languages such as Chinese or English.
Recognition unit 220, configured to recognize the input information to obtain the first information, wherein the input information expresses the first information.
In the embodiments of the present invention, after the input information of the user is acquired, the input information is recognized to obtain the first information expressed by the input information, wherein the first information can be regarded as another expression of the input information, or as a certain feature of the input information. The first information is preset standard information; by recognizing the input information, the input information is converted into this preset standard information.
Searching unit 230, configured to search the preset database for output information according to the first information, wherein the output information expresses the first information, the output information expresses the first information in a manner different from the input information, and the preset database stores the first information, the output information, and the association between the first information and the output information.
In the embodiments of the present invention, the first information has an association with the output information, and the first information, the output information, and the association between them are stored in the preset database. After the first information is obtained, the first information is looked up in the preset database, and the output information corresponding to the first information is found according to the association.
Display unit 240, configured to present the output information to the user.
In the embodiments of the present invention, when the input information is the voice information or text information of a normal person, the output information is gesture information; when the input information is the gesture information of a deaf-mute, the output information is voice information or text information.
In the embodiments of the present invention, by acquiring the input information of the user, recognizing it to obtain the first information it expresses, searching the preset database for the output information that expresses the first information in a different manner, and presenting the output information to the user, a normal person's text information can be recognized and the corresponding gesture information looked up in the preset database, or a deaf-mute's gesture information can be recognized and the corresponding text information looked up in the preset database. This achieves mutual conversion between text information and gesture information, realizes the technical effect that a normal person can communicate smoothly with a deaf-mute, and thereby solves the technical problem that communication between normal people and deaf-mutes is inconvenient.
Optionally, the input information is gesture information, and the recognition unit comprises: a first extraction module, configured to extract an action vector from the gesture information; a first comparison module, configured to compare the action vector with preset action vectors stored in the preset database to obtain a preset action vector matching the action vector, wherein each preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture represents gesture information; a first acquisition module, configured to acquire the dynamic picture associated with the matched preset action vector; and a first information module, configured to take the matched dynamic picture as the first information.
As an optional implementation of the embodiments of the present invention, when a deaf-mute's gesture information needs to be converted, the terminal device scans the gesture, captures both hands and their motion tracks, recognizes the actions of both hands, and extracts action vector data from the actions. Dynamic pictures are stored in the preset database, each dynamic picture representing one piece of gesture information; the action vector of that gesture information is stored in the preset database as a preset action vector and is associated with the dynamic picture. The extracted action vector data are compared with the preset action vectors to obtain the preset action vector matching the extracted action vector, after which the dynamic picture associated with that preset action vector can be obtained.
Optionally, the output information is text information, and the searching unit comprises: a searching module, configured to search the preset database, according to the dynamic picture, for the text information associated with the dynamic picture.
When the input information is gesture information, the output information can be text information. After the dynamic picture is obtained from the gesture information, the text information associated with the dynamic picture is looked up in the preset database; this text information describes the meaning of the gesture represented by the dynamic picture.
Optionally, the output information is text information, and the display unit comprises: a first display module, configured to display the text information to the user; and a generation module, configured to generate voice information from the text information and play the voice information to the user.
As an optional implementation of this embodiment of the present invention, when the output information is text information, the terminal device can either display the text on its screen or convert the text information into voice information and play the voice to the user.
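The screen-or-speech choice is a simple dispatch on the output mode. In the sketch below, `synthesize` is a hypothetical stand-in for a real text-to-speech engine, and the `(channel, payload)` tuples are invented for illustration:

```python
def synthesize(text):
    """Placeholder for a text-to-speech engine call (assumption)."""
    return ("audio", text)

def present(text, mode="screen"):
    """Show the text on screen, or hand it to speech synthesis,
    depending on the requested output mode."""
    if mode == "speech":
        return synthesize(text)
    return ("screen", text)
```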
Optionally, the input information includes voice information and text information, and the device includes: a detection unit, configured to detect whether the input information is voice information or text information before the input information is recognized to obtain the first information; and a conversion unit, configured to convert the input information into text information when it is voice information.
As an optional implementation of this embodiment of the present invention, the voice information or text information of a hearing person can be converted into gesture information that a deaf-mute user can understand. Optionally, when the input information is voice information, it is first converted into text information so that the text information can be recognized.
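The detect-then-convert step can be sketched as a normalization function. `recognize` here is a hypothetical placeholder for a real speech-recognition engine, and treating raw bytes as audio is an assumption made for the example:

```python
def recognize(audio_bytes):
    """Placeholder for a speech-to-text engine call (assumption)."""
    return "<recognized text>"

def normalize_input(payload):
    """Detect whether the input is voice or text, and return text:
    audio input is run through speech recognition first."""
    if isinstance(payload, (bytes, bytearray)):  # assume bytes = audio
        return recognize(payload)
    return payload  # already text information
```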
Optionally, the recognition unit includes: a second extraction module, configured to extract keywords from the text information; a second comparison module, configured to compare the keywords with preset keywords stored in the preset database to obtain the preset keyword that matches a keyword, where each preset keyword is associated with standard text information stored in the preset database, and the standard text information is used to represent the text information; a second acquisition module, configured to obtain the standard text information associated with the matched preset keyword; and a second information module, configured to use the standard text information associated with the matched preset keyword as the first information.
As an optional implementation of this embodiment of the present invention, when the text information is recognized, keywords are extracted from it. Standard text information is stored in the preset database, and the keywords of each piece of standard text information are stored in the preset database as preset keywords associated with that standard text information. The keywords extracted from the text information are compared with the preset keywords in the preset database to obtain the preset keyword that matches an extracted keyword, and the standard text information associated with that preset keyword is then obtained.
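The keyword-matching step could look like the following. The tokenizer and the keyword table are invented for the example; real keyword extraction for Chinese or sign-adjacent text would need a proper segmenter:

```python
# Hypothetical preset database: preset keywords mapped to the standard
# text information they are associated with (contents invented).
KEYWORD_TO_STANDARD = {
    "eat":   "Have you eaten?",
    "drink": "Would you like something to drink?",
}

def extract_keywords(text):
    """Naive whitespace tokenizer with punctuation stripped."""
    return [w.strip(".,?!").lower() for w in text.split()]

def match_standard_text(text):
    """Return the standard texts whose preset keywords appear in
    the input text, in order of appearance."""
    return [KEYWORD_TO_STANDARD[k] for k in extract_keywords(text)
            if k in KEYWORD_TO_STANDARD]
```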
Optionally, the output information is a dynamic picture, and the display unit includes: a second display module, configured to show the dynamic picture to the user.
When the input information is voice information or text information, the output information can be dynamic picture information. The preset database also stores dynamic pictures associated with the standard text information; each such dynamic picture represents a gesture with the same meaning as the standard text information. After the dynamic picture is obtained from the standard text information, it is shown to the user.
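Putting the voice/text-to-gesture direction together, the chain is keyword → standard text → dynamic picture. A compact end-to-end sketch; all table contents and names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical association tables from the preset database.
KEYWORD_TO_STANDARD = {"eat": "Have you eaten?"}
STANDARD_TO_PICTURE = {"Have you eaten?": "eat.gif"}

def text_to_pictures(text):
    """Map input text to dynamic pictures via the keyword ->
    standard-text -> picture associations."""
    words = [w.strip(".,?!").lower() for w in text.split()]
    pictures = []
    for w in words:
        standard = KEYWORD_TO_STANDARD.get(w)
        if standard and standard in STANDARD_TO_PICTURE:
            pictures.append(STANDARD_TO_PICTURE[standard])
    return pictures
```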
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function, and there may be other ways of dividing them in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, read-only memory (ROM), random access memory (RAM), removable hard disk, magnetic disk, or optical disc.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (14)
1. An information conversion method, comprising:
obtaining input information of a user;
recognizing the input information to obtain first information, wherein the input information is used to express the first information;
searching a preset database for output information according to the first information, wherein the output information is used to express the first information, the output information expresses the first information in a manner different from the input information, and the preset database is used to store the first information, the output information, and the association between the first information and the output information; and
showing the output information to the user.
2. The method according to claim 1, wherein the input information is gesture information, and recognizing the input information to obtain the first information comprises:
extracting an action vector from the gesture information;
comparing the action vector with preset action vectors stored in the preset database to obtain the preset action vector that matches the action vector, wherein the preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture is used to represent gesture information;
obtaining the dynamic picture associated with the matched preset action vector; and
using the matched dynamic picture as the first information.
3. The method according to claim 2, wherein the output information is text information, and searching the database for output information according to the first information comprises:
searching the preset database for the text information associated with the dynamic picture.
4. The method according to claim 2 or 3, wherein the output information is text information, and showing the output information to the user comprises:
showing the text information to the user; or
generating voice information from the text information and playing the voice information to the user.
5. The method according to claim 1, wherein the input information comprises voice information and text information, and before the input information is recognized to obtain the first information, the method comprises:
detecting whether the input information is voice information or text information; and
when the input information is voice information, converting the voice information into text information.
6. The method according to claim 5, wherein recognizing the input information to obtain the first information comprises:
extracting keywords from the text information;
comparing the keywords with preset keywords stored in the preset database to obtain the preset keyword that matches a keyword, wherein the preset keyword is associated with standard text information stored in the preset database, and the standard text information is used to represent the text information;
obtaining the standard text information associated with the matched preset keyword; and
using the standard text information associated with the matched preset keyword as the first information.
7. The method according to claim 5, wherein the output information is a dynamic picture, and showing the output information to the user comprises:
showing the dynamic picture to the user.
8. An information conversion device, comprising:
an acquisition unit, configured to obtain input information of a user;
a recognition unit, configured to recognize the input information to obtain first information, wherein the input information is used to express the first information;
a search unit, configured to search a preset database for output information according to the first information, wherein the output information is used to express the first information, the output information expresses the first information in a manner different from the input information, and the preset database is used to store the first information, the output information, and the association between the first information and the output information; and
a display unit, configured to show the output information to the user.
9. The device according to claim 8, wherein the input information is gesture information, and the recognition unit comprises:
a first extraction module, configured to extract an action vector from the gesture information;
a first comparison module, configured to compare the action vector with preset action vectors stored in the preset database to obtain the preset action vector that matches the action vector, wherein the preset action vector is associated with a dynamic picture stored in the preset database, and the dynamic picture is used to represent gesture information;
a first acquisition module, configured to obtain the dynamic picture associated with the matched preset action vector; and
a first information module, configured to use the matched dynamic picture as the first information.
10. The device according to claim 9, wherein the output information is text information, and the search unit comprises:
a search module, configured to search the preset database for the text information associated with the dynamic picture.
11. The device according to claim 9 or 10, wherein the output information is text information, and the display unit comprises:
a first display module, configured to show the text information to the user; and
a generation module, configured to generate voice information from the text information and play the voice information to the user.
12. The device according to claim 8, wherein the input information comprises voice information and text information, and the device comprises:
a detection unit, configured to detect whether the input information is voice information or text information before the input information is recognized to obtain the first information; and
a conversion unit, configured to convert the voice information into text information when the input information is voice information.
13. The device according to claim 12, wherein the recognition unit comprises:
a second extraction module, configured to extract keywords from the text information;
a second comparison module, configured to compare the keywords with preset keywords stored in the preset database to obtain the preset keyword that matches a keyword, wherein the preset keyword is associated with standard text information stored in the preset database, and the standard text information is used to represent the text information;
a second acquisition module, configured to obtain the standard text information associated with the matched preset keyword; and
a second information module, configured to use the standard text information associated with the matched preset keyword as the first information.
14. The device according to claim 12, wherein the output information is a dynamic picture, and the display unit comprises:
a second display module, configured to show the dynamic picture to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710040374.XA CN106896914A (en) | 2017-01-17 | 2017-01-17 | Information conversion method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106896914A (en) | 2017-06-27 |
Family
ID=59197962
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710040374.XA Pending CN106896914A (en) | 2017-01-17 | 2017-01-17 | Information conversion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106896914A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107316516A (en) * | 2017-07-12 | 2017-11-03 | Ma Yongyi | An e-learning course delivery platform |
CN108766127A (en) * | 2018-05-31 | 2018-11-06 | BOE Technology Group Co., Ltd. | Sign language interaction method, device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103425244A (en) * | 2012-05-16 | 2013-12-04 | STMicroelectronics Ltd. | Gesture recognition |
CN104583902A (en) * | 2012-08-03 | 2015-04-29 | Crunchfish AB | Improved identification of a gesture |
KR20150089368A (en) * | 2014-01-27 | 2015-08-05 | Dongguk University Industry-Academic Cooperation Foundation | Dictionary apparatus for sign language |
CN104978886A (en) * | 2015-06-29 | 2015-10-14 | Guangxi Hante Information Industry Co., Ltd. | Sign language interpretation system based on motion-sensing technology, and processing method |
CN105487673A (en) * | 2016-01-04 | 2016-04-13 | BOE Technology Group Co., Ltd. | Man-machine interactive system, method and device |
CN105593787A (en) * | 2013-06-27 | 2016-05-18 | Eyesight Mobile Technologies Ltd. | Systems and methods of direct pointing detection for interaction with a digital device |
US20160232403A1 (en) * | 2015-02-06 | 2016-08-11 | King Fahd University Of Petroleum And Minerals | Arabic sign language recognition using multi-sensor data fusion |
CN106125922A (en) * | 2016-06-22 | 2016-11-16 | Qiqihar University | A sign language and spoken-language image information communication system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Gao et al. | Discriminative multiple canonical correlation analysis for information fusion | |
US10445351B2 (en) | Customer support solution recommendation system | |
US10666792B1 (en) | Apparatus and method for detecting new calls from a known robocaller and identifying relationships among telephone calls | |
CN112738556B (en) | Video processing method and device | |
CN106250553A (en) | A kind of service recommendation method and terminal | |
CN111400513B (en) | Data processing method, device, computer equipment and storage medium | |
KR20060077988A (en) | Context extraction and information provision system and method in multimedia communication system | |
CN110189754A (en) | Voice interactive method, device, electronic equipment and storage medium | |
CN107333071A (en) | Video processing method and device, electronic equipment and storage medium | |
CN104468959A (en) | Method, device and mobile terminal displaying image in communication process of mobile terminal | |
CN103136321A (en) | Method and device of multimedia information processing and mobile terminal | |
CN114401431A (en) | Virtual human explanation video generation method and related device | |
CN108536414A (en) | Method of speech processing, device and system, mobile terminal | |
CN114138960A (en) | User intention identification method, device, equipment and medium | |
US20190155954A1 (en) | Cognitive Chat Conversation Discovery | |
CN105786803B (en) | translation method and translation device | |
CN106681523A (en) | Library configuration method, library configuration device and call handling method of input method | |
CN112035728B (en) | Cross-modal retrieval method and device and readable storage medium | |
CN105045882B (en) | A kind of hot word processing method and processing device | |
CN117033556A (en) | Memory preservation and memory extraction method based on artificial intelligence and related equipment | |
CN111488813B (en) | Video emotion marking method and device, electronic equipment and storage medium | |
CN106896914A (en) | Information conversion method and device | |
Lee et al. | Implementation of high performance objectionable video classification system | |
CN109725798A (en) | The switching method and relevant apparatus of Autonomous role | |
CN110309252A (en) | A kind of natural language processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170627 |