
CN104239323A - Data output method and data output device - Google Patents

Data output method and data output device

Info

Publication number
CN104239323A
CN104239323A (application CN201310239379.7A)
Authority
CN
China
Prior art keywords
character set
audio frequency
entry
character
chinese
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310239379.7A
Other languages
Chinese (zh)
Other versions
CN104239323B (en)
Inventor
刘晓更
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201310239379.7A priority Critical patent/CN104239323B/en
Publication of CN104239323A publication Critical patent/CN104239323A/en
Application granted granted Critical
Publication of CN104239323B publication Critical patent/CN104239323B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/90335Query processing
    • G06F16/90344Query processing by using string matching techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/53Processing of non-Latin text
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/04Electrically-operated educational appliances with audible presentation of the material to be studied

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Educational Technology (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Educational Administration (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Document Processing Apparatus (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a data output method and a data output device, and relates to the field of terminal applications, with the aim of solving the problem that eye fatigue or poor lighting makes it inconvenient for users to acquire entry information. The data output method includes: generating a first character set and a second character set according to the language types of the current entry, wherein the language in the first character set is a first language, the language in the second character set is a second language, and the second character set is used for explaining the first character set; converting the first character set into a first audio through a first text-to-speech (TTS) engine and playing it, wherein the first audio is the audio form of the first character set; and converting the second character set into a second audio through a second TTS engine and playing it, wherein the second audio is the audio form of the second character set. The data output method and device are mainly used on mobile devices.

Description

Data output method and device
Technical field
The present invention relates to the field of terminal applications, and in particular to a data output method and device.
Background art
With the development of applications (Applications, also known as APPs) for mobile devices, more and more users memorize words through APP software on mobile devices.
Existing word-memorization software displays entries on the screen so that the user can recite them. Some more advanced word-memorization software provides a playback function for word pronunciation. The specific implementation is: when the user clicks a speaker button, the mobile device plays the pronunciation of an English word through an English TTS (Text To Speech) engine.
In implementing the above data output process, the inventor found at least the following problem in the prior art: entry information is displayed entirely on the screen, and when reading the entry information shown on the screen the user often cannot see the screen clearly because of eye fatigue or lighting conditions, which makes it inconvenient for the user to obtain the entry information.
Summary of the invention
The data output method and device provided by the present invention can solve the problem that it is inconvenient for a user to obtain entry information because of eye fatigue or lighting conditions.
In one aspect, the present invention provides a data output method, comprising:
generating a first character set and a second character set according to the language types in a current entry, wherein the language in the first character set is a first language, the language in the second character set is a second language, and the second character set is used to explain the first character set;
converting the first character set into a first audio through a first text-to-speech (TTS) engine and playing it, the first audio being the audio form of the first character set;
converting the second character set into a second audio through a second TTS engine and playing it, the second audio being the audio form of the second character set.
In another aspect, the present invention also provides a data output device, comprising:
a generation unit, configured to generate a first character set and a second character set according to the current entry, wherein the language in the first character set is a first language, the language in the second character set is a second language, and the second character set is used to explain the first character set;
a first converting unit, configured to convert the first character set generated by the generation unit into a first audio through a first text-to-speech (TTS) engine, the first audio being the audio form of the first character set;
a first playback unit, configured to play the first audio;
a second converting unit, configured to convert the second character set into a second audio through a second TTS engine, the second audio being the audio form of the second character set;
a second playback unit, configured to play the second audio.
With the data output method and device provided by the present invention, an entry can be split into a first character set and a second character set according to the first language and the second language, the first character set is converted by the first TTS engine, and the second character set is converted by the second TTS engine. In the prior art, one TTS engine can only perform speech conversion for one language, such as the pronunciation of a word, so the full content of an entry cannot be converted to audio and the output of the entry cannot leave the screen. In the present invention, two character sets corresponding to the two languages are generated according to the language types in the entry; the first TTS engine converts the first language into the first audio, and the second TTS engine converts the second language into the second audio, so that information in different languages in the entry is converted into corresponding audio data. By playing the first audio and the second audio, displaying the entry on the screen can be avoided, and the full content of the entry can be output by a means other than the screen.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the accompanying drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first data output method in an embodiment of the present invention;
Fig. 2 is a flowchart of a second data output method in an embodiment of the present invention;
Fig. 3 is a flowchart of a third data output method in an embodiment of the present invention;
Fig. 4 is a flowchart of a fourth data output method in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a data output device in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another data output device in an embodiment of the present invention.
Description of embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a data output method. The method is applied on a mobile device (a smart phone, a tablet computer, etc.). As shown in Fig. 1, the method comprises:
Step 101: generate a first character set and a second character set according to the language types in the current entry, where the language in the first character set is the first language, the language in the second character set is the second language, and the second character set is used to explain the first character set.
Entries are stored in a vocabulary list. Assume the first language is English and the second language is Chinese; the vocabulary list is organized as shown in Table 1, where each record corresponds to one entry and each entry has the following fields: sequence number, word, spelling, part of speech, meaning. The "word" field of the entry is defined as the first character set; the fields of the entry other than the "word" field ("spelling, part of speech, meaning") are defined as the second character set. The part of speech in an entry may use an English abbreviation, in which case its Chinese representation has to be determined from a part-of-speech lookup table, shown in Table 2.
Here, the word is an English word, the part of speech can be converted to Chinese through the part-of-speech lookup table, the spelling consists of the English letters of the word, and the meaning is the Chinese explanation of the word.
The part-of-speech field (vt.) in the first record of Table 1 can be converted to the Chinese part of speech (transitive verb) according to Table 2. The first character set generated from the first record (entry) in Table 1 is "abandon", and the generated second character set is "a, b, a, n, d, o, n, transitive verb, abandon, abandon, abandon" (the last three items being the Chinese glosses of the word).
Table 1
Table 2
Sequence number    Part of speech (English)    Part of speech (Chinese)
1                  vt.                         transitive verb
2                  n.                          noun
m                  prep.                       preposition
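As a concrete illustration of step 101, the sketch below splits an English-Chinese entry into the first and second character sets using a part-of-speech lookup table like Table 2. The `Entry` structure and its field names are assumptions made for illustration; the patent does not prescribe a data model.

```python
# Hypothetical sketch of step 101 for an English-Chinese vocabulary entry.
# Field names and data shapes are illustrative assumptions, not the patent's.

from dataclasses import dataclass

# Part-of-speech lookup table (cf. Table 2): English abbreviation -> Chinese form.
POS_TABLE = {"vt.": "及物动词", "n.": "名词", "prep.": "介词"}

@dataclass
class Entry:
    seq: int
    word: str        # English word, e.g. "abandon"
    pos: str         # part-of-speech abbreviation, e.g. "vt."
    meaning: str     # Chinese meaning(s) of the word

def build_character_sets(entry: Entry) -> tuple[str, str]:
    """Return (first_character_set, second_character_set) for one entry."""
    # First character set: the word itself (the first-language part).
    first = entry.word
    # Spelling: the word read out letter by letter, e.g. "a, b, a, n, d, o, n".
    spelling = ", ".join(entry.word)
    # Convert the abbreviated part of speech to its Chinese representation.
    pos_cn = POS_TABLE.get(entry.pos, entry.pos)
    # Second character set: spelling + part of speech + Chinese meaning,
    # i.e. everything in the entry except the word field.
    second = f"{spelling}, {pos_cn}, {entry.meaning}"
    return first, second

entry = Entry(seq=1, word="abandon", pos="vt.", meaning="放弃, 抛弃, 丢弃")
print(build_character_sets(entry))
```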
Step 102: convert the first character set into a first audio through a first text-to-speech (TTS) engine and play it; the first audio is the audio form of the first character set.
Step 103: convert the second character set into a second audio through a second TTS engine and play it; the second audio is the audio form of the second character set.
A TTS engine has the function of recognizing English or Chinese text, converting it into audio, and playing it. Existing TTS engines can recognize English and simplified Chinese. A TTS engine that can recognize simplified Chinese can also recognize individual English letters, but cannot recognize English words.
When converting the first character set or the second character set, if a recorded human voice package is stored, the audio corresponding to the first character set or the second character set is looked up in that voice package. If no human voice package is stored, the first character set or the second character set is converted according to the word or letter pronunciations provided by the TTS engine.
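Steps 102 and 103 can be pictured with the sketch below: one engine handles the first-language text, a second engine handles the second-language text, and a recorded human voice package is preferred when it contains the needed audio. The `TtsEngine` and `VoicePackage` interfaces are hypothetical stand-ins, not a real platform API; mobile TTS APIs (Android's TextToSpeech, for example) differ in detail.

```python
# Illustrative sketch of steps 102-103; TtsEngine and VoicePackage are
# hypothetical interfaces assumed for this example.

from typing import Optional, Protocol

class TtsEngine(Protocol):
    def synthesize(self, text: str) -> bytes: ...       # returns audio data

class VoicePackage(Protocol):
    def lookup(self, text: str) -> Optional[bytes]: ...  # recorded human audio, if any

def to_audio(text: str, engine: TtsEngine,
             voice_pack: Optional[VoicePackage] = None) -> bytes:
    # Prefer a recorded human voice package when one is installed and
    # contains audio for this character set; otherwise fall back to TTS.
    if voice_pack is not None:
        recorded = voice_pack.lookup(text)
        if recorded is not None:
            return recorded
    return engine.synthesize(text)

def play_entry(first_set: str, second_set: str,
               english_tts: TtsEngine, chinese_tts: TtsEngine, play) -> None:
    # Step 102: first character set -> first audio via the first TTS engine.
    play(to_audio(first_set, english_tts))
    # Step 103: second character set -> second audio via the second TTS engine.
    play(to_audio(second_set, chinese_tts))
```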
With the data output method provided by the embodiment of the present invention, an entry can be split into a first character set and a second character set according to the first language and the second language, the first character set is converted by the first TTS engine, and the second character set is converted by the second TTS engine. In the prior art, one TTS engine can only perform speech conversion for one language, such as the pronunciation of a word, so the full content of an entry cannot be converted to audio and the output of the entry cannot leave the screen. In the present invention, two character sets corresponding to the two languages are generated according to the language types in the entry; the first TTS engine converts the first language into the first audio, and the second TTS engine converts the second language into the second audio, so that information in different languages in the entry is converted into corresponding audio data. By playing the first audio and the second audio, displaying the entry on the screen can be avoided, and the full content of the entry can be output by a means other than the screen.
It should be noted that the embodiments of the present invention do not limit the capabilities of the TTS engines. If the TTS engines can convert between languages other than Chinese and English, the present invention can likewise output vocabulary lists for other language pairs, for example Chinese-Japanese, Chinese-Korean, Chinese-Russian, etc.
An embodiment of the present invention further provides a data output method, as a further illustration of the method shown in Fig. 1. As shown in Fig. 2, when the first TTS engine is used to convert English into English speech and the second TTS engine is used to convert Chinese into Chinese speech, the step of generating the first character set and the second character set can be implemented as follows:
Step 201: obtain the word from the current entry and define the word as the first character set.
Step 202: obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set; the information comprises: the part of speech, the Chinese meaning, and the spelling of the word.
Further, after step 202, the method also comprises:
Step 203: obtain, from the current entry, an English sentence or phrase that includes the word, and define the English sentence or phrase as a third character set.
Step 204: obtain, from the current entry, the Chinese sentence or phrase meaning corresponding to the English sentence or phrase, and define the Chinese sentence or phrase as a fourth character set.
After step 103, the method also comprises:
converting the third character set into a third audio through the first TTS engine and playing it, the third audio being the audio form of the third character set;
converting the fourth character set into a fourth audio through the second TTS engine and playing it, the fourth audio being the audio form of the fourth character set.
In addition to the fields shown in Table 1, some vocabulary lists also include an example sentence field and an example sentence translation field, as shown in Table 3. For the entry whose sequence number is 1 in Table 3, the generated first character set is "abandon", the third character set is "I will not abandon my pursuit of anything.", the second character set is "a, b, a, n, d, o, n, transitive verb, abandon, abandon, abandon", and the fourth character set is the Chinese translation meaning "I will not abandon anything I pursue." During playback, the four sets are played in the order: first character set, second character set, third character set, fourth character set.
In addition, the example sentence field of Table 3 may also be a phrase; for example, when the word is "give", the content of the example sentence field is "give up", and the content of the example sentence translation field is the Chinese meanings "abandon; surrender; give; give up".
Table 3
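Under the same assumptions as the earlier sketches (reusing `to_audio`), an entry that also carries example-sentence fields (Table 3) yields four character sets that are played in the order first, second, third, fourth. The dictionary keys below are assumed field names, not the patent's.

```python
# Illustrative ordering of the four audios for one English-Chinese entry;
# the field names are assumptions for the Table 3 layout.

def play_full_entry(entry: dict, english_tts, chinese_tts, play) -> None:
    first = entry["word"]                      # e.g. "abandon"
    second = entry["second_set"]               # spelling + part of speech + Chinese meaning
    third = entry["example_sentence"]          # English example sentence or phrase
    fourth = entry["example_translation"]      # its Chinese translation

    # Play order: first, second, third, fourth character set.
    play(to_audio(first, english_tts))         # English word (1st TTS engine)
    play(to_audio(second, chinese_tts))        # Chinese explanation (2nd TTS engine)
    play(to_audio(third, english_tts))         # English example sentence (1st TTS engine)
    play(to_audio(fourth, chinese_tts))        # Chinese translation (2nd TTS engine)
```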
With the data output method provided by the embodiment of the present invention, a first audio and a third audio can be generated through the first TTS engine, and a second audio and a fourth audio through the second TTS engine, from an entry in an English-Chinese vocabulary list. By playing the first, second, third, and fourth audios, the content of the entry is provided to the user in audio form, the screen is not needed for output, and the user can still obtain the content of the entry in poor lighting conditions.
An embodiment of the present invention further provides a data output method, as a further illustration of the method shown in Fig. 1. As shown in Fig. 3, when the first TTS engine is used to convert Chinese into Chinese speech and the second TTS engine is used to convert English into English speech, the step of generating the first character set and the second character set according to the language types in the current entry can be implemented as follows:
Step 301: obtain the Chinese character or word from the current entry and define the Chinese character or word as the first character set.
Step 302: obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set; the information comprises: the part of speech, the English meaning, and the pinyin of the Chinese character.
Further, after step 302, the method also comprises:
Step 303: obtain, from the current entry, a Chinese sentence that includes the Chinese character or word, and define the Chinese sentence as a third character set.
Step 304: obtain, from the current entry, the English meaning corresponding to the Chinese sentence, and define the English meaning as a fourth character set.
After step 103, the method also comprises:
converting the third character set into a third audio through the first TTS engine and playing it, the third audio being the audio form of the third character set;
converting the fourth character set into a fourth audio through the second TTS engine and playing it, the fourth audio being the audio form of the fourth character set.
The output method provided by the embodiment of the present invention can also be used to output a Chinese-English vocabulary list, as shown in Table 4. For the entry whose sequence number is 1 in Table 4, the generated first character set is the Chinese word meaning "to abandon", the third character set is the Chinese sentence meaning "I will not abandon anything I pursue.", the second character set is "diu, qi, vt., abandon", and the fourth character set is "I will not abandon my pursuit of anything." During playback, the four sets are played in the order: first character set, second character set, third character set, fourth character set.
Table 4
With the data output method provided by the embodiment of the present invention, a first audio and a third audio can be generated through the first TTS engine, and a second audio and a fourth audio through the second TTS engine, from an entry in a Chinese-English vocabulary list. By playing the first, second, third, and fourth audios, the content of the entry is provided to the user in audio form, the screen is not needed for output, and the user can still obtain the content of the entry in poor lighting conditions.
An embodiment of the present invention further provides a data output method, as a further illustration of the method shown in Fig. 1. Before step 101, the method also comprises:
determining an entry set from an entry database, and reading one entry from the entry set in sequence as the current entry, the entry set consisting of at least one entry;
or, obtaining one entry from the entry database as the current entry.
To make it easier for the user to memorize words, only a certain number of words are provided for recitation each time; this can also be called a recitation plan. Technically, a predetermined number or preset proportion of entries is selected from the entry database as the entry set. The entries can be selected in forward order, in reverse order, or at random. The method shown in Fig. 1 is then performed for each entry in the entry set in sequence. The entry database may be a primary school vocabulary list, a university vocabulary list, a College English Test vocabulary list, etc.
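A minimal sketch of this recitation-plan selection: a fixed number of entries is drawn from the entry database in forward order, reverse order, or at random. Modeling the database as a plain list is an assumption for illustration.

```python
# Illustrative selection of an entry set (a "recitation plan") from the
# entry database; the list-based database is an assumption.

import random

def select_entry_set(entry_database: list, count: int, mode: str = "forward") -> list:
    if mode == "forward":        # take the first `count` entries in order
        return entry_database[:count]
    if mode == "reverse":        # take entries from the end, in reverse order
        return entry_database[::-1][:count]
    if mode == "random":         # pick `count` entries at random
        return random.sample(entry_database, min(count, len(entry_database)))
    raise ValueError(f"unknown selection mode: {mode}")

# The method of Fig. 1 is then run for each entry in the set, in sequence:
# for entry in select_entry_set(cet4_vocabulary, count=20, mode="random"):
#     play_full_entry(entry, english_tts, chinese_tts, play)
```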
An embodiment of the present invention further provides a data output method, as a further illustration of the method shown in Fig. 1. As shown in Fig. 4, the method also comprises:
Step 401: receive a stop instruction triggered by the user, the stop instruction being used to stop the first audio or the second audio currently being played.
Step 402: after the first audio or the second audio finishes playing, record the position of the current entry.
Step 403: receive a play instruction triggered by the user, and continue playing entries from the saved position, or play entries from the starting position; the play instruction is used to continue playing entries.
If the user triggers a stop instruction during playback of the first, second, third, or fourth audio, the position of the current entry is recorded after that audio finishes playing. When the user presses the play button again, playback continues according to the stored entry position, which includes: 1. if the fourth audio was playing when the last stop instruction was received, the next entry is played; 2. if an audio other than the fourth audio was playing when the last stop instruction was received, the next audio is played; 3. the next entry is played from the beginning; 4. playback starts from the first entry.
Further, when playing the above four audios, each audio can be played repeatedly according to a preset number of times (for example, twice). After the current entry has been played, it is determined whether there are still unplayed entries in the entry set; if so, playback continues with the unplayed entries; if not, or if a stop instruction triggered by the user is received, playback stops. The play instruction and stop instruction can be triggered by a button on the screen: when nothing is playing, the button shows "Play"; if the user clicks "Play", the button shows "Pause"; when the user clicks "Pause", the button shows "Play" again.
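The stop/resume behaviour of Fig. 4 amounts to remembering where playback was when the stop instruction arrived: the current audio is allowed to finish, the entry (and audio) position is recorded, and the next play instruction either continues with the next audio or moves to the next entry, as in options 1 and 2 above. The sketch below is an assumed bookkeeping scheme, not code from the patent.

```python
# Illustrative stop/resume bookkeeping for Fig. 4; the resume policy mirrors
# options 1-2 in the text and is an assumption.

class PlaybackPosition:
    def __init__(self):
        self.entry_index = 0     # which entry in the entry set
        self.audio_index = 0     # which of the four audios (0..3) was playing
        self.stopped = False

    def on_stop(self, entry_index: int, audio_index: int) -> None:
        # Steps 401/402: the current audio is allowed to finish, then the
        # position of the current entry is recorded.
        self.entry_index = entry_index
        self.audio_index = audio_index
        self.stopped = True

    def resume_target(self) -> tuple[int, int]:
        # Step 403: continue from the stored position.
        if self.audio_index == 3:                        # the fourth audio had been playing
            return self.entry_index + 1, 0               # -> start the next entry
        return self.entry_index, self.audio_index + 1    # -> play the next audio
```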
With the data output method provided by the embodiment of the present invention, the converted audios can be played a preset number of times, and whether to continue playing is determined from the playback status of the entries, which reduces the number of operations the user needs to perform for repeated playback. Through the stop instruction and play instruction triggered by the user, the user can control the playback of entries.
An embodiment of the present invention further provides a data output device 51. As shown in Fig. 5, the data output device 51 comprises:
a generation unit 511, configured to generate a first character set and a second character set according to the language types in the current entry, where the language in the first character set is the first language, the language in the second character set is the second language, and the second character set is used to explain the first character set;
a first converting unit 512, configured to convert the first character set generated by the generation unit 511 into a first audio through a first text-to-speech (TTS) engine, the first audio being the audio form of the first character set;
a first playback unit 513, configured to play the first audio;
a second converting unit 514, configured to convert the second character set into a second audio through a second TTS engine, the second audio being the audio form of the second character set;
a second playback unit 515, configured to play the second audio.
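The device of Fig. 5 mirrors the method: a generation unit produces the two character sets, two converting units wrap the two TTS engines, and two playback units output the audio. The classes below simply restate that structure, reusing the earlier sketches; they are an illustrative decomposition with assumed names, not code from the patent.

```python
# Illustrative decomposition of device 51 into its units; names mirror
# Fig. 5 but the classes themselves are assumptions.

class GenerationUnit:
    def generate(self, entry) -> tuple[str, str]:
        return build_character_sets(entry)       # first and second character sets

class ConvertingUnit:
    def __init__(self, tts_engine):
        self.tts_engine = tts_engine
    def convert(self, character_set: str) -> bytes:
        return self.tts_engine.synthesize(character_set)

class PlaybackUnit:
    def play(self, audio: bytes) -> None:
        ...                                       # hand the audio to the platform player

class DataOutputDevice:
    def __init__(self, english_tts, chinese_tts):
        self.generation_unit = GenerationUnit()
        self.first_converting_unit = ConvertingUnit(english_tts)    # first TTS engine
        self.second_converting_unit = ConvertingUnit(chinese_tts)   # second TTS engine
        self.first_playback_unit = PlaybackUnit()
        self.second_playback_unit = PlaybackUnit()

    def output(self, entry) -> None:
        first, second = self.generation_unit.generate(entry)
        self.first_playback_unit.play(self.first_converting_unit.convert(first))
        self.second_playback_unit.play(self.second_converting_unit.convert(second))
```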
Further, the first TTS engine is used to convert English into English speech, the second TTS engine is used to convert Chinese into Chinese speech, and the generation unit 511 comprises a first acquisition subunit 5111 and a second acquisition subunit 5112.
The first acquisition subunit 5111 is configured to obtain the word from the current entry and define the word as the first character set.
The second acquisition subunit 5112 is configured to obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set; the information comprises: the part of speech, the Chinese meaning, and the spelling of the word.
Further, as a further illustration of the device shown in Fig. 5, and as shown in Fig. 6, the generation unit 511 comprises:
a third acquisition subunit 5113, configured to obtain, from the current entry, an English sentence or phrase that includes the word, and define the English sentence or phrase as a third character set;
a fourth acquisition subunit 5114, configured to obtain, from the current entry, the Chinese sentence or phrase meaning corresponding to the English sentence or phrase, and define the Chinese sentence or phrase meaning as a fourth character set.
The data output device 51 also comprises a third playback unit 516 and a fourth playback unit 517.
The first converting unit 512 is further configured to convert the third character set into a third audio through the first TTS engine, the third audio being the audio form of the third character set.
The third playback unit 516 is configured to play the third audio.
The second converting unit 514 is further configured to convert the fourth character set into a fourth audio through the second TTS engine, the fourth audio being the audio form of the fourth character set.
The fourth playback unit 517 is configured to play the fourth audio.
Further, when the first TTS engine is used to convert Chinese into Chinese speech and the second TTS engine is used to convert English into English speech:
the first acquisition subunit 5111 is further configured to obtain the Chinese character or word from the current entry and define the Chinese character or word as the first character set;
the second acquisition subunit 5112 is further configured to obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set; the information comprises: the part of speech, the English meaning, and the pinyin of the Chinese character.
Further, in the generation unit 511:
the third acquisition subunit 5113 is further configured to obtain, from the current entry, a Chinese sentence that includes the Chinese character or word, and define the Chinese sentence as a third character set;
the fourth acquisition subunit 5114 is further configured to obtain, from the current entry, the English meaning corresponding to the Chinese sentence, and define the English meaning as a fourth character set;
the first converting unit 512 is further configured to convert the third character set into a third audio through the first TTS engine, the third audio being the audio form of the third character set;
the third playback unit 516 is further configured to play the third audio;
the second converting unit 514 is further configured to convert the fourth character set into a fourth audio through the second TTS engine, the fourth audio being the audio form of the fourth character set;
the fourth playback unit 517 is further configured to play the fourth audio.
Further, the data output device 51 also comprises:
a determining unit 518, configured to determine an entry set from an entry database, and to read one entry from the entry set in sequence as the current entry, the entry set consisting of at least one entry.
The determining unit 518 is also configured to obtain one entry from the entry database as the current entry.
Further, the data output device 51 also comprises:
a receiving unit 519, configured to receive a stop instruction triggered by the user, the stop instruction being used to stop the first audio or the second audio currently being played;
a storage unit 5110, configured to store the position of the current entry after the first audio or the second audio finishes playing.
The receiving unit 519 is also configured to receive a play instruction triggered by the user, the play instruction being used to continue playing entries.
The first playback unit 513 is also configured to continue playing entries from the saved position, or to play entries from the starting position.
With the data output device provided by the embodiment of the present invention, an entry can be split into a first character set and a second character set according to the first language and the second language, the first character set is converted by the first TTS engine, and the second character set is converted by the second TTS engine. In the prior art, one TTS engine can only perform speech conversion for one language, such as the pronunciation of a word, so the full content of an entry cannot be converted to audio and the output of the entry cannot leave the screen. In the present invention, two character sets corresponding to the two languages are generated according to the language types in the entry; the first TTS engine converts the first language into the first audio, and the second TTS engine converts the second language into the second audio, so that information in different languages in the entry is converted into corresponding audio data. By playing the first audio and the second audio, displaying the entry on the screen can be avoided, and the full content of the entry can be output by a means other than the screen. In addition, a first audio and a third audio can be generated through the first TTS engine, and a second audio and a fourth audio through the second TTS engine, from an entry in an English-Chinese vocabulary list, and likewise from an entry in a Chinese-English vocabulary list; by playing the first, second, third, and fourth audios, the content of the entry is provided to the user in audio form, the screen is not needed for output, and the user can still obtain the content of the entry in poor lighting conditions.
A person skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the functional modules described above is used as an example. In practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. For the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are only schematic; the division of the modules or units is only a logical functional division, and there may be other ways of dividing them in actual implementation, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person skilled in the art can easily think of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

1. A data output method, comprising:
generating a first character set and a second character set according to the language types in a current entry, wherein the language in the first character set is a first language, the language in the second character set is a second language, and the second character set is used to explain the first character set;
converting the first character set into a first audio through a first text-to-speech (TTS) engine and playing it, the first audio being the audio form of the first character set; and
converting the second character set into a second audio through a second TTS engine and playing it, the second audio being the audio form of the second character set.
2. The method according to claim 1, wherein the first TTS engine is used to convert English into English speech, the second TTS engine is used to convert Chinese into Chinese speech, and the step of generating the first character set and the second character set according to the language types in the current entry comprises:
obtaining the word from the current entry, and defining the word as the first character set;
obtaining a combination of at least one of the following pieces of information from the current entry, and defining the combination as the second character set, the information comprising: the part of speech, the Chinese meaning, and the spelling of the word.
3. The method according to claim 2, wherein, after the step of obtaining the combination of at least one of the pieces of information from the current entry and defining the combination as the second character set, the method further comprises:
obtaining, from the current entry, an English sentence or phrase that includes the word, and defining the English sentence or phrase as a third character set;
obtaining, from the current entry, the Chinese sentence or phrase meaning corresponding to the English sentence or phrase, and defining the Chinese sentence or phrase as a fourth character set;
and wherein, after the step of converting the second character set into the second audio through the second TTS engine and playing it, the second audio being the audio form of the second character set, the method further comprises:
converting the third character set into a third audio through the first TTS engine and playing it, the third audio being the audio form of the third character set;
converting the fourth character set into a fourth audio through the second TTS engine and playing it, the fourth audio being the audio form of the fourth character set.
4. The method according to claim 1, wherein the first TTS engine is used to convert Chinese into Chinese speech, the second TTS engine is used to convert English into English speech, and the step of generating the first character set and the second character set according to the language types in the current entry comprises:
obtaining the Chinese character or word from the current entry, and defining the Chinese character or word as the first character set;
obtaining a combination of at least one of the following pieces of information from the current entry, and defining the combination as the second character set, the information comprising: the part of speech, the English meaning, and the pinyin of the Chinese character.
5. The method according to claim 4, wherein, after the step of obtaining the combination of at least one of the pieces of information from the current entry and defining the combination as the second character set, the method further comprises:
obtaining, from the current entry, a Chinese sentence that includes the Chinese character or word, and defining the Chinese sentence as a third character set;
obtaining, from the current entry, the English meaning corresponding to the Chinese sentence, and defining the English meaning as a fourth character set;
and wherein, after the step of converting the second character set into the second audio through the second TTS engine and playing it, the second audio being the audio form of the second character set, the method further comprises:
converting the third character set into a third audio through the first TTS engine and playing it, the third audio being the audio form of the third character set;
converting the fourth character set into a fourth audio through the second TTS engine and playing it, the fourth audio being the audio form of the fourth character set.
6. The method according to any one of claims 1 to 5, wherein, before the step of generating the first character set and the second character set according to the current entry, the method further comprises:
determining an entry set from an entry database, and reading one entry from the entry set in sequence as the current entry, the entry set consisting of at least one entry;
or, obtaining one entry from the entry database as the current entry.
7. The method according to claim 6, wherein the method further comprises:
receiving a stop instruction triggered by the user, the stop instruction being used to stop the first audio or the second audio currently being played;
after the first audio or the second audio finishes playing, recording the position of the current entry;
receiving a play instruction triggered by the user, and continuing to play entries from the saved position, or playing entries from the starting position, the play instruction being used to continue playing entries.
8. A data output device, comprising:
a generation unit, configured to generate a first character set and a second character set according to the language types in a current entry, wherein the language in the first character set is a first language, the language in the second character set is a second language, and the second character set is used to explain the first character set;
a first converting unit, configured to convert the first character set generated by the generation unit into a first audio through a first text-to-speech (TTS) engine, the first audio being the audio form of the first character set;
a first playback unit, configured to play the first audio;
a second converting unit, configured to convert the second character set into a second audio through a second TTS engine, the second audio being the audio form of the second character set; and
a second playback unit, configured to play the second audio.
9. The device according to claim 8, wherein the first TTS engine is used to convert English into English speech, the second TTS engine is used to convert Chinese into Chinese speech, and the generation unit comprises a first acquisition subunit and a second acquisition subunit;
the first acquisition subunit is configured to obtain the word from the current entry and define the word as the first character set;
the second acquisition subunit is configured to obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set, the information comprising: the part of speech, the Chinese meaning, and the spelling of the word.
10. The device according to claim 9, wherein the generation unit comprises:
a third acquisition subunit, configured to obtain, from the current entry, an English sentence or phrase that includes the word, and define the English sentence or phrase as a third character set;
a fourth acquisition subunit, configured to obtain, from the current entry, the Chinese sentence or phrase meaning corresponding to the English sentence or phrase, and define the Chinese sentence or phrase meaning as a fourth character set;
the device further comprises a third playback unit and a fourth playback unit;
the first converting unit is further configured to convert the third character set into a third audio through the first TTS engine, the third audio being the audio form of the third character set;
the third playback unit is configured to play the third audio;
the second converting unit is further configured to convert the fourth character set into a fourth audio through the second TTS engine, the fourth audio being the audio form of the fourth character set;
the fourth playback unit is configured to play the fourth audio.
11. The device according to claim 10, wherein the first TTS engine is used to convert Chinese into Chinese speech, and the second TTS engine is used to convert English into English speech;
the first acquisition subunit is further configured to obtain the Chinese character or word from the current entry and define the Chinese character or word as the first character set;
the second acquisition subunit is further configured to obtain a combination of at least one of the following pieces of information from the current entry and define the combination as the second character set, the information comprising: the part of speech, the English meaning, and the pinyin of the Chinese character.
12. The device according to claim 11, wherein:
the third acquisition subunit is further configured to obtain, from the current entry, a Chinese sentence that includes the Chinese character or word, and define the Chinese sentence as a third character set;
the fourth acquisition subunit is further configured to obtain, from the current entry, the English meaning corresponding to the Chinese sentence, and define the English meaning as a fourth character set;
the first converting unit is further configured to convert the third character set into a third audio through the first TTS engine, the third audio being the audio form of the third character set;
the third playback unit is further configured to play the third audio;
the second converting unit is further configured to convert the fourth character set into a fourth audio through the second TTS engine, the fourth audio being the audio form of the fourth character set;
the fourth playback unit is further configured to play the fourth audio.
13. The device according to any one of claims 8 to 12, wherein the device further comprises:
a determining unit, configured to determine an entry set from an entry database, and to read one entry from the entry set in sequence as the current entry, the entry set consisting of at least one entry;
the determining unit being further configured to obtain one entry from the entry database as the current entry.
14. The device according to claim 13, wherein the device further comprises:
a receiving unit, configured to receive a stop instruction triggered by the user, the stop instruction being used to stop the first audio or the second audio currently being played;
a storage unit, configured to store the position of the current entry after the first audio or the second audio finishes playing;
the receiving unit being further configured to receive a play instruction triggered by the user, the play instruction being used to continue playing entries;
the first playback unit being further configured to continue playing entries from the saved position, or to play entries from the starting position.
CN201310239379.7A 2013-06-17 2013-06-17 Data output method and device Active CN104239323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310239379.7A CN104239323B (en) 2013-06-17 2013-06-17 Data output method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310239379.7A CN104239323B (en) 2013-06-17 2013-06-17 Data output method and device

Publications (2)

Publication Number Publication Date
CN104239323A true CN104239323A (en) 2014-12-24
CN104239323B CN104239323B (en) 2019-03-26

Family

ID=52227413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310239379.7A Active CN104239323B (en) Data output method and device

Country Status (1)

Country Link
CN (1) CN104239323B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5526259A (en) * 1990-01-30 1996-06-11 Hitachi, Ltd. Method and apparatus for inputting text
CN101944092A (en) * 2009-07-06 2011-01-12 英业达股份有限公司 Word pronunciation system and method thereof
CN102324995A (en) * 2011-04-20 2012-01-18 铁道部运输局 Speech broadcasting method and system
CN102881192A (en) * 2012-09-04 2013-01-16 广东群兴玩具股份有限公司 Chinese character writing learning system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228972A (en) * 2016-07-08 2016-12-14 北京光年无限科技有限公司 Method and system for reading aloud mixed multi-language text for an intelligent robot system

Also Published As

Publication number Publication date
CN104239323B (en) 2019-03-26

Similar Documents

Publication Publication Date Title
US8606581B1 (en) Multi-pass speech recognition
JP2018532165A (en) Learning personalized entity pronunciation
CN107507615A (en) Interface intelligent interaction control method, device, system and storage medium
US20160019816A1 (en) Language Learning Tool
US10140976B2 (en) Discriminative training of automatic speech recognition models with natural language processing dictionary for spoken language processing
CN109741735B (en) Modeling method, acoustic model acquisition method and acoustic model acquisition device
CN105760356B (en) Automatic generation method and system for English word dictation question alternative options
McTear et al. Voice application development for Android
CN101158947A (en) Method and apparatus for machine translation
CN103714048A (en) Method and system used for revising text
CN112309393B (en) Method and system for domain-specific correction of automatic speech recognition output
US20070083369A1 (en) Generating words and names using N-grams of phonemes
Bali et al. A hindi speech recognizer for an agricultural video search application
JP2013025648A (en) Interaction device, interaction method and interaction program
Ciobanu et al. Speech recognition and synthesis technologies in the translation workflow
WO2015171154A1 (en) Methods and apparatus for speech recognition using a garbage model
JP2019185737A (en) Search method and electronic device using the same
CN108831503B (en) Spoken language evaluation method and device
CN111782779B (en) Voice question-answering method, system, mobile terminal and storage medium
CN105404694A (en) Automatic generation method, generation device and equipment of English learning reading materials
CN112837675B (en) Voice recognition method, device, related system and equipment
CN104239323A (en) Data output method and data output device
DE102019007797B4 (en) Matching voice commands while testing voice-assisted app prototypes for languages with non-phonetic alphabets
US10102203B2 (en) Method for writing a foreign language in a pseudo language phonetically resembling native language of the speaker
US11361761B2 (en) Pattern-based statement attribution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240104

Address after: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 2, 518000, East 403 room, SEG science and Technology Park, Zhenxing Road, Shenzhen, Guangdong, Futian District

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TR01 Transfer of patent right