
CN114495939B - A neural speech transliteration method based on distinctive features - Google Patents


Info

Publication number
CN114495939B
Authority
CN
China
Prior art keywords
ipa
final
sequence
initial consonant
distinctive features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111610301.2A
Other languages
Chinese (zh)
Other versions
CN114495939A (en)
Inventor
郭宇航
王志鹏
陈朔鹰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Priority to CN202111610301.2A
Publication of CN114495939A
Application granted
Publication of CN114495939B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/40 - Processing or translation of natural language
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/24 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being the cepstrum
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G10L2015/025 - Phonemes, fenemes or fenones being the recognition units
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L2015/086 - Recognition of spelled words

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Machine Translation (AREA)

Abstract


The present invention relates to a neural speech transliteration method based on distinctive features, and belongs to the technical field of speech recognition. The method first recognizes audio using end-to-end speech recognition, then performs rule-based conversion of the recognized sequence according to distinctive features. The end-to-end speech recognition component receives English speech and recognizes the corresponding International Phonetic Alphabet (IPA) sequence; the rule component converts the English IPA sequence into the closest Chinese pinyin initial/final IPA sequence according to the distinctive features, and then converts that IPA sequence into a pinyin sequence according to the IPA-to-pinyin mapping rules. By combining distinctive features with end-to-end speech recognition, the method is helpful for cross-language communication and English learning, can also provide everyday fun, and has good practical applicability.

Description

Neural speech transliteration method based on distinctive features
Technical Field
The invention relates to a neural speech transliteration method based on distinctive features, and belongs to the technical field of speech recognition.
Background Art
With the rapid development of computer technology and its application throughout society, new problems have arisen, such as the difficulty of processing massive amounts of speech data and of human-computer interaction.
The goal of speech recognition is for a computer to automatically convert human speech into text. With further advances in artificial intelligence and deep learning, speech recognition has made significant progress, and most existing speech recognition systems adopt deep-learning-based methods. Distinctive features, in turn, are properties grounded in the natural characteristics of speech that can distinguish linguistic units. For a phoneme, a vector representation can be constructed according to whether the phoneme carries each feature, annotated "+" or "-".
Transliteration is a well-known concept with wide application in daily life. Current transliteration research focuses mainly on text, usually the transliteration of place names, person names, and certain proper nouns in machine translation. Transliteration of entire texts, and especially of speech into text, is very rare. Existing transliteration methods are mainly rule-based: English sequences are mapped to Chinese character sequences by comparing English phonemes with Chinese pinyin, or English words are mapped directly to Chinese characters according to written rules. Such mappings, however, often produce stilted text that requires further processing.
Although speech transliteration has fewer application scenarios than text transliteration, the technology can make daily life more convenient. In English learning, for example, phonetic transliteration is often an effective mnemonic. Speech transliteration can also facilitate impromptu cross-language communication, and transliterating teaching audio can aid memorization. In addition, it can simply bring people some fun.
Disclosure of Invention
Aiming at the defects of the prior art, and in order to remedy the lack of transliteration methods in the speech domain, the invention provides a new technical approach for transliteration research and creatively proposes a neural speech transliteration method based on distinctive features.
The innovation of the invention is that it combines neural speech recognition with distinctive features for the first time, realizing transliteration from English speech to Chinese pinyin text. The bridge between neural speech recognition and the rule processing based on distinctive features is the IPA (International Phonetic Alphabet). During rule processing, similar IPA characters are substituted for one another by computing the similarity between IPA characters, and the result is finally converted into a pinyin sequence using the IPA-to-pinyin mapping rules.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
A neural speech transliteration method based on distinctive features comprises the following steps:
First, the English text sequences corresponding to the training set are converted into IPA sequences, and the converted data set is used to train a neural speech recognition model. A Transformer model can be used as the neural speech recognition model.
The speech features to be transliterated are then input into the trained neural speech recognition model to obtain the IPA sequence corresponding to the English speech.
At the same time, the corresponding IPA is found for each Chinese pinyin initial/final, and the vector representation of that initial/final is determined according to the distinctive features.
Then, for each IPA vector of the output sequence, the Euclidean distance to each Chinese pinyin initial and final is calculated as a measure of the similarity between two IPA characters. Each English IPA character in the output sequence is replaced with the closest Chinese pinyin initial/final IPA character, yielding the IPA sequence of the Chinese pinyin initials/finals.
Finally, according to the pinyin initials/finals and their IPA mapping rules, each IPA is replaced with the corresponding initial or final to obtain the pinyin initial/final sequence. The initial and final sequences are then combined to obtain the final output pinyin sequence, as sketched below.
If an initial is followed by a final, the space between them is removed and the two are merged, e.g., "h ao" becomes "hao"; if an initial is followed by another initial, the space is likewise removed, e.g., "k s" becomes "ks".
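As a concrete illustration of this final replacement step, the sketch below applies an IPA-to-pinyin lookup table; the IPA_TO_PINYIN entries and the ipa_to_pinyin helper are hypothetical stand-ins for the full mapping rules of fig. 3, not the patent's actual table.

```python
# Hedged sketch of the IPA-to-pinyin replacement. The three table entries
# are illustrative placeholders for the full mapping of fig. 3.
IPA_TO_PINYIN = {
    "x": "h",    # pinyin initial h is commonly transcribed [x]
    "au": "ao",  # illustrative final entry
    "kʰ": "k",   # aspirated [kʰ] -> pinyin k
}

def ipa_to_pinyin(ipa_tokens):
    """Replace each IPA token with its pinyin initial/final, if known."""
    return [IPA_TO_PINYIN.get(tok, tok) for tok in ipa_tokens]

print(ipa_to_pinyin(["x", "au"]))  # -> ['h', 'ao']
```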
Advantageous effects
The method combines advanced speech recognition technology with distinctive features from linguistics, filling a gap in the field of speech transliteration. Converting the transliteration result into a pinyin sequence alleviates the problem of stilted pronunciation after transliteration. Other refinements help as well: "k s", for example, is not output as "ke si" but merged into "ks", which native Chinese speakers can still understand easily and which greatly shortens the transliterated sequence. Using a pinyin sequence rather than an IPA sequence as the final output makes the result easy for native Chinese speakers to understand without having to learn the International Phonetic Alphabet.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a partial screenshot of the IPA distinctive-feature annotation table used in the distinctive-feature rule section of the method.
Fig. 3 is a partial screenshot of the pinyin initial (right) / final (left) to IPA mapping tables used by the distinctive-feature rule section of the method.
Fig. 4 shows the result of the data processing performed before training the neural speech recognition model.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
As shown in fig. 1, the neural speech transliteration method based on distinctive features comprises the following steps.
Step 1: Process the data set by converting the English text sequences of the labels into IPA sequences. Sentence representations before and after conversion are shown in fig. 4. The data set is used to train a neural speech recognition model; a Transformer model can be selected for this purpose.
Feature extraction is performed on the waveform file of the speech clip's audio signal. MFCC (Mel-frequency cepstral coefficients) can be used, or any other audio feature extraction method.
Feature extraction yields a two-dimensional array of feature values, where the first dimension represents time and the second represents frequency. In the time dimension, each time step t is characterized by a feature vector x_t.
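A minimal sketch of this feature-extraction step, assuming the librosa toolkit and a 16 kHz WAV file named speech_clip.wav (both assumptions; the patent does not prescribe a specific library):

```python
# Hedged sketch: MFCC feature extraction with librosa (one of several
# possible toolkits). The file name and sampling rate are illustrative.
import librosa

y, sr = librosa.load("speech_clip.wav", sr=16000)   # waveform of the speech clip
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (n_mfcc, n_frames)
features = mfcc.T  # transpose to (time, frequency): one vector x_t per time step
print(features.shape)
```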
Step 2: Input the extracted feature sequence into the neural speech recognition model and recognize the corresponding IPA sequence.
Step 3: Determine the vector representation of each IPA character of the output sequence according to the distinctive features. As shown in fig. 2, each IPA character carries a label for each distinctive feature: "+" is represented by the number 1, "-" by the number -1, and "0" by the number 0. Each IPA character is thus represented as a vector containing only 0, -1, and 1.
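A small sketch of this encoding, assuming the labels are read off one row of the fig. 2 table (the sample row below is hypothetical):

```python
# Hedged sketch: map an IPA character's '+'/'-'/'0' distinctive-feature
# labels to the numeric vector described in step 3.
LABEL_TO_NUM = {"+": 1, "-": -1, "0": 0}

def encode_ipa(labels):
    """Turn a row of '+'/'-'/'0' annotations into a vector of 1/-1/0."""
    return [LABEL_TO_NUM[label] for label in labels]

# Hypothetical annotation row for one IPA character:
print(encode_ipa(["-", "+", "-", "0", "+"]))  # -> [-1, 1, -1, 0, 1]
```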
Meanwhile, the vector representation of each Chinese pinyin initial/final is determined according to the distinctive features, computed in the same way as for the IPA characters above. As shown in fig. 3, some initials and finals are written with several IPA characters; when computing the vector representation of such an initial/final, the vectors of its IPA characters are averaged.
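The averaging rule might look as follows, a sketch assuming numpy; the two component vectors are placeholders:

```python
# Hedged sketch: an initial/final spelled with several IPA characters gets
# the element-wise mean of those characters' feature vectors.
import numpy as np

def unit_vector(ipa_vectors):
    """Average the vectors of the IPA characters spelling one initial/final."""
    return np.mean(np.asarray(ipa_vectors, dtype=float), axis=0)

# e.g. a final written with two IPA characters:
print(unit_vector([[1, -1, 0], [1, 1, 0]]))  # -> [1. 0. 0.]
```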
Step 4: Calculate the distance between each IPA character in the output sequence and each initial/final, and replace the IPA character with the IPA of the closest initial/final.
Because the distinctive features contribute unequally, a weighted Euclidean distance is used when computing the distance between vectors: the weights of the syllabic, consonantal, sonorant, approximant, voiced, spread-glottis, constricted-glottis, and strident features are lowered, while the weights of the tongue-tip, tongue-blade, and tongue-body features are raised. Each IPA character in the output sequence is then replaced with the closest initial or final IPA according to the resulting distances, yielding the Chinese pinyin IPA sequence. A sketch of this step follows.
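The sketch below shows the weighted distance and nearest-neighbour replacement; the 4-dimensional toy vectors, the two-entry inventory, and the weight values are illustrative assumptions, since the patent only states which features are down-weighted and which are up-weighted.

```python
# Hedged sketch of step 4: weighted Euclidean distance plus
# nearest-neighbour replacement against the pinyin initial/final inventory.
import numpy as np

def weighted_distance(a, b, w):
    """Weighted Euclidean distance between two feature vectors."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

def nearest_unit(ipa_vec, inventory, w):
    """Name of the pinyin initial/final whose vector is closest to ipa_vec."""
    return min(inventory, key=lambda name: weighted_distance(ipa_vec, inventory[name], w))

w = np.array([0.5, 0.5, 2.0, 2.0])  # down-weighted features first (illustrative values)
inventory = {
    "b":  np.array([1, -1, 0, 1]),
    "ao": np.array([-1, 1, 1, 0]),
}
print(nearest_unit(np.array([1, -1, 1, 1]), inventory, w))  # -> 'b'
```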
Step 5: Based on the pinyin IPA sequence from step 4, represent the output sequence as an initial/final sequence.
After this conversion, words are separated by "/" and the IPA characters within a word are separated by spaces. The correspondence between Chinese pinyin and IPA is shown in fig. 3.
Step 6: Post-process the resulting initial/final sequence.
Within each word, every initial is merged with the initial or final that follows it, yielding the final pinyin sequence; a sketch follows.
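A sketch of this merge, assuming the word's units arrive as a list of space-separated tokens and that the set of finals is known (merge_word and the sample sets are illustrative):

```python
# Hedged sketch of the step-6 post-processing: within a word, an initial is
# joined to whatever follows it, whether a final ("h"+"ao" -> "hao") or
# another initial ("k"+"s" -> "ks"), by dropping the intervening space.
def merge_word(tokens, finals):
    merged, pending = [], ""
    for tok in tokens:
        if tok in finals:
            merged.append(pending + tok)  # initial(s) + final form one syllable
            pending = ""
        else:
            pending += tok                # initial followed by initial: keep joining
    if pending:
        merged.append(pending)            # trailing initials, e.g. "ks"
    return merged

finals = {"ao", "o", "i"}  # illustrative subset of pinyin finals
print(merge_word(["h", "ao"], finals))           # -> ['hao']
print(merge_word(["k", "s"], finals))            # -> ['ks']
print(merge_word(["b", "o", "k", "s"], finals))  # -> ['bo', 'ks']
```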
Result verification
The method was applied to English speech transliteration using the TED-LIUM v2 English data set; the transliteration results are shown in Table 1.
TABLE 1 English phonetic transliteration results
The results show that the method accomplishes the basic transliteration, and the sentence length of the transliterated result is moderate. The pinyin formed by combining initials and finals may include non-standard syllables such as "rei", which represent pronunciation more flexibly than standard pinyin tied to specific Chinese characters. Since the output is pinyin, it is close to Chinese, and the reader does not need to learn the International Phonetic Alphabet.

Claims (5)

1. A neural speech transliteration method based on distinctive features, characterized in that it comprises the following steps:
Step 1: convert the English text sequences corresponding to the training set into International Phonetic Alphabet (IPA) sequences, and use the converted data set to train a neural speech recognition model;
Step 2: input the speech features to be transliterated into the trained neural speech recognition model to obtain the IPA sequence corresponding to the English speech;
Step 3: determine the vector representation of each English IPA character in the output sequence according to the distinctive features; at the same time, find the corresponding IPA for each Chinese pinyin initial/final, and determine the vector representation of that initial/final according to the distinctive features;
Step 4: for each IPA vector of the output sequence, calculate the Euclidean distance between it and each Chinese pinyin initial and final, thereby measuring the similarity between two IPA characters; replace each English IPA character in the output sequence with the closest Chinese pinyin initial/final IPA character, obtaining the IPA sequence of Chinese pinyin initials/finals;
Step 5: according to the pinyin initials/finals and their IPA mapping rules, replace each IPA with the corresponding initial or final to obtain the pinyin initial/final sequence; combine the initial and final sequences to obtain the final output pinyin sequence;
wherein, if an initial is followed by a final, the space between the two is removed and they are combined; if an initial is followed by another initial, the space between the two is likewise removed and they are combined.
2. The neural speech transliteration method based on distinctive features according to claim 1, characterized in that the neural speech recognition model is a Transformer model.
3. The neural speech transliteration method based on distinctive features according to claim 1, characterized in that, in step 3, each IPA character carries a label for each distinctive feature, with "+" represented by the number 1, "-" by the number -1, and "0" by the number 0, so that each IPA character is represented as a vector containing only 0, -1, and 1.
4. The neural speech transliteration method based on distinctive features according to claim 1, characterized in that, in step 3, when an initial or final is represented by multiple IPA characters, the vector of that initial/final is obtained by averaging the vectors of its IPA characters.
5. The neural speech transliteration method based on distinctive features according to claim 1, characterized in that, in step 4, a weighted Euclidean distance is used when calculating the distance between vectors, with the weights of the syllabic, consonantal, sonorant, approximant, voiced, spread-glottis, constricted-glottis, and strident features lowered, and the weights of the tongue-tip and tongue-blade features raised.
CN202111610301.2A 2021-12-27 2021-12-27 A neural speech transliteration method based on distinctive features Active CN114495939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111610301.2A CN114495939B (en) 2021-12-27 2021-12-27 A neural speech transliteration method based on distinctive features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111610301.2A CN114495939B (en) 2021-12-27 2021-12-27 A neural speech transliteration method based on distinctive features

Publications (2)

Publication Number Publication Date
CN114495939A CN114495939A (en) 2022-05-13
CN114495939B (en) 2025-02-14

Family

ID=81495745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111610301.2A Active CN114495939B (en) 2021-12-27 2021-12-27 A neural speech transliteration method based on distinctive features

Country Status (1)

Country Link
CN (1) CN114495939B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102193643A (en) * 2010-03-15 2011-09-21 北京搜狗科技发展有限公司 Word input method and input method system having translation function
CN108109610A (en) * 2017-11-06 2018-06-01 芋头科技(杭州)有限公司 A kind of simulation vocal technique and simulation sonification system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5047932A (en) * 1988-12-29 1991-09-10 Talent Laboratory, Inc. Method for coding the input of Chinese characters from a keyboard according to the first phonetic symbols and tones thereof
KR101990021B1 (en) * 2015-11-11 2019-06-18 주식회사 엠글리쉬 Apparatus and method for displaying foreign language and mother language by using english phonetic symbol
CN112151005B (en) * 2020-09-28 2022-08-19 四川长虹电器股份有限公司 Chinese and English mixed speech synthesis method and device


Also Published As

Publication number Publication date
CN114495939A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN107103900B (en) Cross-language emotion voice synthesis method and system
CN113571037B (en) Chinese braille voice synthesis method and system
CN111640418B (en) Prosodic phrase identification method and device and electronic equipment
US6134528A (en) Method device and article of manufacture for neural-network based generation of postlexical pronunciations from lexical pronunciations
CN104217713A (en) Tibetan-Chinese speech synthesis method and device
CN112990353B (en) A method for constructing Chinese characters easily confused set based on multimodal model
CN110767213A (en) Rhythm prediction method and device
Ghai et al. Analysis of automatic speech recognition systems for indo-aryan languages: Punjabi a case study
KR101424193B1 (en) Non-direct data-based pronunciation variation modeling system and method for improving performance of speech recognition system for non-native speaker speech
CN112397056A (en) Voice evaluation method and computer storage medium
CN113539268A (en) End-to-end voice-to-text rare word optimization method
CN115547293A (en) Multi-language voice synthesis method and system based on layered prosody prediction
CN115130457B (en) Prosodic modeling method and modeling system integrating Amdo Tibetan phoneme vectors
Labied et al. Moroccan Dialect “Darija” automatic speech recognition: a survey
US11817079B1 (en) GAN-based speech synthesis model and training method
Azim et al. Large vocabulary Arabic continuous speech recognition using tied states acoustic models
CN114495939B (en) A neural speech transliteration method based on distinctive features
CN1956057B (en) A device and method for predicting speech duration based on a decision tree
CN116434780A (en) Language learning system with multiple pronunciation correction function
CN111508522A (en) Statement analysis processing method and system
Tian Data-driven approaches for automatic detection of syllable boundaries.
CN115440193A (en) Pronunciation evaluation scoring method based on deep learning
CN114566143A (en) Speech synthesis method and speech synthesis system capable of locally modifying content
CN112329581A (en) Lip language identification method based on Chinese pronunciation visual characteristics
Rudrappa et al. KHiTE: Multilingual Speech Acquisition to Monolingual Text Translation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant