
CN106653055A - On-line oral English evaluating system - Google Patents


Info

Publication number
CN106653055A
CN106653055A (application CN201610912307.8A)
Authority
CN
China
Prior art keywords: module, audio frequency, time, section, frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610912307.8A
Other languages
Chinese (zh)
Inventor
李曙光 (Li Shuguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Innovation Partner Education Technology Co Ltd
Original Assignee
Beijing Innovation Partner Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Innovation Partner Education Technology Co Ltd filed Critical Beijing Innovation Partner Education Technology Co Ltd
Priority to CN201610912307.8A priority Critical patent/CN106653055A/en
Publication of CN106653055A publication Critical patent/CN106653055A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/04 Electrically-operated educational appliances with audible presentation of the material to be studied
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to the field of education automation, and in particular to an online oral English evaluation system. The system comprises a speech preprocessing module, a convolutional neural network analysis module, and an assessment and feedback module. The speech preprocessing module randomly divides the oral English audio file to be evaluated into slices of equal length. The convolutional neural network analysis module applies the short-time Fourier transform to the audio slices to generate corresponding two-dimensional time-frequency images, then performs high-level abstraction on the images one by one to obtain the high-level abstract features of the slices. The assessment and feedback module analyzes these features one by one with a machine learning model to obtain a score for each slice, then averages all the scores to obtain the final oral English evaluation score.

Description

Online oral English evaluation system
Technical field
The present invention relates to the field of education automation, and in particular to an online oral English evaluation system.
Background technology
Spoken-language online testing products already exist on the market, but these products all currently adopt the following method: the student's spoken audio is first converted into text using speech recognition technology, feature analysis is then performed on the recognized text, and a machine learning algorithm finally produces the student's spoken-language assessment result. The greatest problems with this method lie in the speech recognition stage and the subsequent feature analysis stage. First, high-precision English speech recognition engines are expensive to develop; at present only large technology companies and research institutions such as Google and IBM possess them. Second, the recognition result determines everything that follows, yet current English speech recognition technology is sufficiently accurate only for standard pronunciation, and performs poorly on beginners whose pronunciation is not yet accurate, such as Chinese learners. Finally, the feature analysis stage requires experts in spoken English teaching and examination to design the features, which consumes considerable manpower and material resources, and the results are still unsatisfactory.
Summary of the invention
Object of the invention: the present invention improves upon the problems of the prior art described above by disclosing an online oral English evaluation system which, without using English speech recognition technology and without relying on experts in spoken English teaching and examination, assesses and scores a learner's spoken English, achieving accuracy and robustness equal to or higher than that of existing methods.
Technical scheme: an online oral English evaluation system comprises the following modules:
a speech preprocessing module, for randomly dividing the oral English audio file to be evaluated into slices of equal length;
a convolutional neural network analysis module, which applies the short-time Fourier transform to each audio slice to generate a corresponding two-dimensional time-frequency image, then performs high-level abstraction on the two-dimensional time-frequency images one by one to obtain the high-level abstract features of the audio slices;
an assessment and feedback module, which analyzes the high-level abstract features of the audio slices one by one with a machine learning model to obtain a score for each slice, then averages all the scores to obtain the final oral English evaluation score.
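The first step of the convolutional neural network analysis module, turning one audio slice into a two-dimensional time-frequency image via the short-time Fourier transform, can be sketched in plain numpy as below. The 16 kHz sample rate, 25 ms frame length and 10 ms hop are common illustrative defaults, not values fixed by the patent:

```python
import numpy as np

def stft_image(slice_, frame_len=400, hop=160):
    """Short-time Fourier transform of one audio slice, returned as a 2-D
    time-frequency image (rows: frequency bins, columns: frames) of log
    magnitude.  400-sample frames with a 160-sample hop correspond to
    25 ms / 10 ms at 16 kHz."""
    window = np.hamming(frame_len)
    n_frames = 1 + (len(slice_) - frame_len) // hop
    frames = np.stack([slice_[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    magnitude = np.abs(np.fft.rfft(frames, axis=1))   # magnitude spectrum per frame
    return np.log1p(magnitude).T                      # freq x time image

sr = 16000
slice_ = np.sin(2 * np.pi * 440 * np.arange(5 * sr) / sr)  # one 5 s test slice
image = stft_image(slice_)
print(image.shape)
```

For a 5-second slice this yields a 201 x 498 image (201 frequency bins, 498 frames), which is what the later convolutional layers consume.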
Further, each random audio slice is 5 seconds long.
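The random division into equal 5-second slices can be sketched as follows; the number of slices and the random seed are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def random_slices(signal, sr, slice_sec=5.0, n_slices=4, seed=0):
    """Randomly cut equal-length slices from a mono audio signal.
    slice_sec=5.0 matches the 5-second duration stated above; n_slices
    and seed are illustrative choices."""
    rng = np.random.default_rng(seed)
    slice_len = int(slice_sec * sr)
    if len(signal) < slice_len:
        raise ValueError("signal shorter than one slice")
    starts = rng.integers(0, len(signal) - slice_len + 1, size=n_slices)
    return [signal[s:s + slice_len] for s in starts]

sr = 16000
audio = np.random.default_rng(1).standard_normal(30 * sr)  # 30 s of test audio
slices = random_slices(audio, sr)
print(len(slices), len(slices[0]))
```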
Further, the speech preprocessing module comprises the following modules:
a speech analysis module, for randomly dividing the oral English audio file to be evaluated into slices of equal length and then applying pre-emphasis, framing, windowing and endpoint detection to all audio slices;
a speech signal processing module, which performs time-domain analysis, frequency-domain analysis and cepstral-domain analysis on all audio slices in turn;
an acoustic parameter analysis module, which analyzes and computes the acoustic parameters of the audio slices; the acoustic parameters include Mel-frequency cepstral coefficients, linear prediction cepstral coefficients and line spectral pair coefficients.
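A minimal sketch of the speech analysis module's pre-emphasis, framing, windowing and a crude energy-based endpoint detector. The coefficient 0.97, the 25 ms/10 ms framing at 16 kHz and the energy threshold are commonly used values assumed here, not values specified by the patent:

```python
import numpy as np

def preprocess(slice_, alpha=0.97, frame_len=400, hop=160, energy_thresh=1e-4):
    """Pre-emphasis, framing and Hamming windowing of one audio slice,
    plus a crude energy-based endpoint detector."""
    # Pre-emphasis: y[n] = x[n] - alpha * x[n-1], boosts high frequencies
    emphasized = np.append(slice_[0], slice_[1:] - alpha * slice_[:-1])
    # Framing
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    # Windowing
    windowed = frames * np.hamming(frame_len)
    # Endpoint detection: keep frames whose energy exceeds a relative threshold
    energy = (windowed ** 2).sum(axis=1)
    voiced = energy > energy_thresh * energy.max()
    return windowed[voiced], voiced

sr = 16000
sig = np.concatenate([np.zeros(sr),                      # 1 s of silence
                      np.sin(2 * np.pi * 300 * np.arange(sr) / sr)])  # 1 s of tone
voiced_frames, mask = preprocess(sig)
print(voiced_frames.shape, int(mask.sum()))
```

On this test signal the detector discards the leading second of silence and keeps the frames covering the tone.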
Further, the speech signal processing module comprises the following modules:
a time-domain analysis module, which analyzes and extracts the time-domain feature parameters of the audio slices;
a frequency-domain analysis module, which extracts the spectrum, power spectrum, cepstrum and spectral envelope of the audio slices by means of band-pass filter banks, the short-time Fourier transform, frequency-domain pitch detection and time-frequency representations;
a cepstral-domain analysis module, which analyzes and extracts the cepstral-domain feature parameters of the audio slices and further uses homomorphic processing to separate the glottal excitation information from the vocal tract response information effectively: the glottal excitation information is used to distinguish voiced from unvoiced sounds and to estimate the pitch period, while the vocal tract response information is used to estimate formants, for speech coding, synthesis and recognition.
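The homomorphic separation described above can be illustrated with the real cepstrum: the low-quefrency coefficients carry the vocal tract envelope, while a peak at higher quefrency marks the pitch period of the glottal excitation. The 60-400 Hz search range below is an illustrative assumption for human voices:

```python
import numpy as np

def real_cepstrum(frame):
    """Real cepstrum of one frame: inverse FFT of the log magnitude
    spectrum.  Low quefrencies describe the vocal tract (spectral
    envelope / formants); a high-quefrency peak marks the pitch period."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12   # avoid log(0)
    return np.fft.irfft(np.log(spectrum))

def pitch_period(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate the pitch period (in samples) from the cepstral peak,
    searching only the quefrency range corresponding to fmin..fmax Hz."""
    c = real_cepstrum(frame)
    lo, hi = int(sr / fmax), int(sr / fmin)
    return lo + int(np.argmax(c[lo:hi]))

sr = 16000
pulse_train = np.zeros(1000)
pulse_train[::100] = 0.9 ** np.arange(10)   # decaying pulses, period 100 samples
print(pitch_period(pulse_train, sr))
```

For the decaying pulse train above the cepstral peak lands at a quefrency of 100 samples, i.e. the 160 Hz excitation period.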
Further, the time-domain feature parameters include short-time energy, short-time average magnitude, short-time average zero-crossing rate, short-time autocorrelation coefficients and the short-time average magnitude difference function.
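These time-domain parameters are straightforward to compute per frame. A numpy sketch, in which a lag-1 autocorrelation stands in for the full short-time autocorrelation sequence:

```python
import numpy as np

def time_domain_features(frames):
    """Per-frame time-domain parameters named in the text.  `frames` is a
    2-D array (n_frames x frame_len) of samples."""
    energy = (frames ** 2).sum(axis=1)                 # short-time energy
    magnitude = np.abs(frames).sum(axis=1)             # short-time average magnitude
    # short-time average zero-crossing rate
    zcr = 0.5 * np.abs(np.diff(np.sign(frames), axis=1)).sum(axis=1) / frames.shape[1]
    # short-time autocorrelation at lag 1 (one representative lag)
    autocorr1 = (frames[:, :-1] * frames[:, 1:]).sum(axis=1)
    return energy, magnitude, zcr, autocorr1

def amdf(frame, max_lag):
    """Short-time average magnitude difference function for lags 1..max_lag;
    it dips near the pitch period of a voiced frame."""
    return np.array([np.abs(frame[k:] - frame[:-k]).mean()
                     for k in range(1, max_lag + 1)])

n = np.arange(500)
frame = np.sin(2 * np.pi * n / 50)                 # exactly 10 periods of 50 samples
energy, magnitude, zcr, ac1 = time_domain_features(frame[None, :])
diff_fn = amdf(frame, max_lag=80)
print(energy[0], np.argmin(diff_fn) + 1)           # AMDF minimum lag ~ pitch period
```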
Further, the convolutional neural network analysis module comprises the following modules:
an input module, which converts the audio slices into a number of two-dimensional time-frequency images of the same scale;
a convolutional layer C1, which convolves the two-dimensional time-frequency images from the input module with trainable filters and additive biases to obtain the local features of the images;
a feature map S2, which subsamples the local features extracted by convolutional layer C1, applies weights and biases, and takes the maximum of each image-region feature, aggregating and mapping the image features;
a convolutional layer C3, which performs convolution again on the image features produced by feature map S2 to obtain low-dimensional local features of the images;
a feature map S4, which subsamples the image features extracted by convolutional layer C3, applies weights and biases, and takes the mean of each image-region feature, completing the final aggregation and mapping of the image features;
an output module, which combines the acoustic parameters of each audio slice with the image features processed by feature map S4 and outputs the result as the overall feature of the audio slice.
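The C1-S2-C3-S4 data flow can be illustrated with untrained random filters in plain numpy. The image size, filter sizes, number of feature maps and the 13 acoustic parameters are all illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels, bias):
    """Valid 2-D convolution of one image with a bank of kernels plus bias,
    followed by a squashing nonlinearity."""
    kh, kw = kernels.shape[1:]
    H, W = image.shape
    out = np.empty((len(kernels), H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = (image[i:i + kh, j:j + kw] * ker).sum() + bias[k]
    return np.tanh(out)

def pool(maps, size=2, op=np.max):
    """size x size subsampling: maximum for S2, mean for S4 as in the text."""
    C, H, W = maps.shape
    H2, W2 = H // size, W // size
    r = maps[:, :H2 * size, :W2 * size].reshape(C, H2, size, W2, size)
    return op(op(r, axis=4), axis=2)

image = rng.standard_normal((28, 28))                 # one time-frequency image
c1 = conv2d(image, rng.standard_normal((4, 5, 5)) * 0.1, np.zeros(4))
s2 = pool(c1, op=np.max)                              # max-pooled feature maps
c3 = np.stack([conv2d(m, rng.standard_normal((1, 5, 5)) * 0.1, np.zeros(1))[0]
               for m in s2])
s4 = pool(c3, op=np.mean)                             # mean-pooled feature maps
acoustic = rng.standard_normal(13)                    # acoustic parameters of the slice
feature = np.concatenate([s4.ravel(), acoustic])      # overall slice feature
print(c1.shape, s2.shape, c3.shape, s4.shape, feature.shape)
```

The final concatenated vector is what the output module would pass to the machine learning model that scores each slice before the scores are averaged.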
Beneficial effects: the online oral English evaluation system disclosed by the invention has the following advantages:
1. Low cost: it does not rely on speech recognition technology and does not require experts in spoken English teaching and examination to design features;
2. Strong robustness: because it does not depend on speech recognition results, it also scores the speech of non-native English speakers with high accuracy;
3. Strong extensibility: it continues to learn as data accumulates, and performs extremely well with large amounts of data.
Description of the drawings
Fig. 1 is a schematic diagram of the convolutional neural network analysis module.
Specific embodiments
A specific embodiment of the present invention is described in detail below.
An online oral English evaluation system comprises the following modules:
a speech preprocessing module, for randomly dividing the oral English audio file to be evaluated into slices of equal length;
a convolutional neural network analysis module, which applies the short-time Fourier transform to each audio slice to generate a corresponding two-dimensional time-frequency image, then performs high-level abstraction on the two-dimensional time-frequency images one by one to obtain the high-level abstract features of the audio slices;
an assessment and feedback module, which analyzes the high-level abstract features of the audio slices one by one with a machine learning model to obtain a score for each slice, then averages all the scores to obtain the final oral English evaluation score.
Further, each random audio slice is 5 seconds long.
Further, the speech preprocessing module comprises the following modules:
a speech analysis module, for randomly dividing the oral English audio file to be evaluated into slices of equal length and then applying pre-emphasis, framing, windowing and endpoint detection to all audio slices;
a speech signal processing module, which performs time-domain analysis, frequency-domain analysis and cepstral-domain analysis on all audio slices in turn;
an acoustic parameter analysis module, which analyzes and computes the acoustic parameters of the audio slices; the acoustic parameters include Mel-frequency cepstral coefficients, linear prediction cepstral coefficients and line spectral pair coefficients.
Further, the speech signal processing module comprises the following modules:
a time-domain analysis module, which analyzes and extracts the time-domain feature parameters of the audio slices;
a frequency-domain analysis module, which extracts the spectrum, power spectrum, cepstrum and spectral envelope of the audio slices by means of band-pass filter banks, the short-time Fourier transform, frequency-domain pitch detection and time-frequency representations;
a cepstral-domain analysis module, which analyzes and extracts the cepstral-domain feature parameters of the audio slices and further uses homomorphic processing to separate the glottal excitation information from the vocal tract response information effectively: the glottal excitation information is used to distinguish voiced from unvoiced sounds and to estimate the pitch period, while the vocal tract response information is used to estimate formants, for speech coding, synthesis and recognition.
Further, the time-domain feature parameters include short-time energy, short-time average magnitude, short-time average zero-crossing rate, short-time autocorrelation coefficients and the short-time average magnitude difference function.
Further, as shown in Fig. 1, the convolutional neural network analysis module comprises the following modules:
an input module, which converts the audio slices into a number of two-dimensional time-frequency images of the same scale;
a convolutional layer C1, which convolves the two-dimensional time-frequency images from the input module with trainable filters and additive biases to obtain the local features of the images;
a feature map S2, which subsamples the local features extracted by convolutional layer C1, applies weights and biases, and takes the maximum of each image-region feature, aggregating and mapping the image features;
a convolutional layer C3, which performs convolution again on the image features produced by feature map S2 to obtain low-dimensional local features of the images;
a feature map S4, which subsamples the image features extracted by convolutional layer C3, applies weights and biases, and takes the mean of each image-region feature, completing the final aggregation and mapping of the image features;
an output module, which combines the acoustic parameters of each audio slice with the image features processed by feature map S4 and outputs the result as the overall feature of the audio slice.
An embodiment of the present invention has been described in detail above. The present invention is not limited to this embodiment, however; a person of ordinary skill in the art may make various changes, within the scope of the knowledge possessed in the art, without departing from the inventive concept.

Claims (6)

1. An online oral English evaluation system, characterized in that it comprises the following modules:
a speech preprocessing module, for randomly dividing the oral English audio file to be evaluated into slices of equal length;
a convolutional neural network analysis module, which applies the short-time Fourier transform to each audio slice to generate a corresponding two-dimensional time-frequency image, then performs high-level abstraction on the two-dimensional time-frequency images one by one to obtain the high-level abstract features of the audio slices;
an assessment and feedback module, which analyzes the high-level abstract features of the audio slices one by one with a machine learning model to obtain a score for each slice, then averages all the scores to obtain the final oral English evaluation score.
2. The online oral English evaluation system according to claim 1, characterized in that each random audio slice is 5 seconds long.
3. The online oral English evaluation system according to claim 1, characterized in that the speech preprocessing module comprises the following modules:
a speech analysis module, for randomly dividing the oral English audio file to be evaluated into slices of equal length and then applying pre-emphasis, framing, windowing and endpoint detection to all audio slices;
a speech signal processing module, which performs time-domain analysis, frequency-domain analysis and cepstral-domain analysis on all audio slices in turn;
an acoustic parameter analysis module, which analyzes and computes the acoustic parameters of the audio slices, the acoustic parameters including Mel-frequency cepstral coefficients, linear prediction cepstral coefficients and line spectral pair coefficients.
4. The online oral English evaluation system according to claim 3, characterized in that the speech signal processing module comprises the following modules:
a time-domain analysis module, which analyzes and extracts the time-domain feature parameters of the audio slices;
a frequency-domain analysis module, which extracts the spectrum, power spectrum, cepstrum and spectral envelope of the audio slices by means of band-pass filter banks, the short-time Fourier transform, frequency-domain pitch detection and time-frequency representations;
a cepstral-domain analysis module, which analyzes and extracts the cepstral-domain feature parameters of the audio slices and further uses homomorphic processing to separate the glottal excitation information from the vocal tract response information effectively: the glottal excitation information is used to distinguish voiced from unvoiced sounds and to estimate the pitch period, and the vocal tract response information is used to estimate formants, for speech coding, synthesis and recognition.
5. The online oral English evaluation system according to claim 4, characterized in that the time-domain feature parameters include short-time energy, short-time average magnitude, short-time average zero-crossing rate, short-time autocorrelation coefficients and the short-time average magnitude difference function.
6. The online oral English evaluation system according to claim 1, characterized in that the convolutional neural network analysis module comprises the following modules:
an input module, which converts the audio slices into a number of two-dimensional time-frequency images of the same scale;
a convolutional layer C1, which convolves the two-dimensional time-frequency images from the input module with trainable filters and additive biases to obtain the local features of the images;
a feature map S2, which subsamples the local features extracted by convolutional layer C1, applies weights and biases, and takes the maximum of each image-region feature, aggregating and mapping the image features;
a convolutional layer C3, which performs convolution again on the image features produced by feature map S2 to obtain low-dimensional local features of the images;
a feature map S4, which subsamples the image features extracted by convolutional layer C3, applies weights and biases, and takes the mean of each image-region feature, completing the final aggregation and mapping of the image features;
an output module, which combines the acoustic parameters of each audio slice with the image features processed by feature map S4 and outputs the result as the overall feature of the audio slice.
CN201610912307.8A 2016-10-20 2016-10-20 On-line oral English evaluating system Pending CN106653055A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610912307.8A CN106653055A (en) 2016-10-20 2016-10-20 On-line oral English evaluating system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610912307.8A CN106653055A (en) 2016-10-20 2016-10-20 On-line oral English evaluating system

Publications (1)

Publication Number Publication Date
CN106653055A (en) 2017-05-10

Family

ID=58856223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610912307.8A Pending CN106653055A (en) 2016-10-20 2016-10-20 On-line oral English evaluating system

Country Status (1)

Country Link
CN (1) CN106653055A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107039049A (en) * 2017-05-27 2017-08-11 郑州仁峰软件开发有限公司 A kind of data assessment educational system
CN107886968A (en) * 2017-12-28 2018-04-06 广州讯飞易听说网络科技有限公司 Speech evaluating method and system
CN108364661A (en) * 2017-12-15 2018-08-03 海尔优家智能科技(北京)有限公司 Visualize speech performance appraisal procedure, device, computer equipment and storage medium
CN112133312A (en) * 2020-09-24 2020-12-25 上海松鼠课堂人工智能科技有限公司 Spoken language training method and system based on deep learning
CN113112990A (en) * 2021-03-04 2021-07-13 昆明理工大学 Language identification method of variable-duration voice based on spectrum envelope diagram
CN114571472A (en) * 2020-12-01 2022-06-03 北京小米移动软件有限公司 Ground attribute detection method and driving method for foot type robot and device thereof

Citations (8)

Publication number Priority date Publication date Assignee Title
CN101494049A (en) * 2009-03-11 2009-07-29 北京邮电大学 Method for extracting audio characteristic parameter of audio monitoring system
CN101739868A (en) * 2008-11-19 2010-06-16 中国科学院自动化研究所 Automatic evaluation and diagnosis method of text reading level for oral test
CN101826263A (en) * 2009-03-04 2010-09-08 中国科学院自动化研究所 Objective standard based automatic oral evaluation system
CN103065626A (en) * 2012-12-20 2013-04-24 中国科学院声学研究所 Automatic grading method and automatic grading equipment for read questions in test of spoken English
CN103617799A (en) * 2013-11-28 2014-03-05 广东外语外贸大学 Method for detecting English statement pronunciation quality suitable for mobile device
CN104732977A (en) * 2015-03-09 2015-06-24 广东外语外贸大学 On-line spoken language pronunciation quality evaluation method and system
CN104992705A (en) * 2015-05-20 2015-10-21 普强信息技术(北京)有限公司 English oral automatic grading method and system
CN105825852A (en) * 2016-05-23 2016-08-03 渤海大学 Oral English reading test scoring method

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN101739868A (en) * 2008-11-19 2010-06-16 中国科学院自动化研究所 Automatic evaluation and diagnosis method of text reading level for oral test
CN101826263A (en) * 2009-03-04 2010-09-08 中国科学院自动化研究所 Objective standard based automatic oral evaluation system
CN101494049A (en) * 2009-03-11 2009-07-29 北京邮电大学 Method for extracting audio characteristic parameter of audio monitoring system
CN103065626A (en) * 2012-12-20 2013-04-24 中国科学院声学研究所 Automatic grading method and automatic grading equipment for read questions in test of spoken English
CN103617799A (en) * 2013-11-28 2014-03-05 广东外语外贸大学 Method for detecting English statement pronunciation quality suitable for mobile device
CN104732977A (en) * 2015-03-09 2015-06-24 广东外语外贸大学 On-line spoken language pronunciation quality evaluation method and system
CN104992705A (en) * 2015-05-20 2015-10-21 普强信息技术(北京)有限公司 English oral automatic grading method and system
CN105825852A (en) * 2016-05-23 2016-08-03 渤海大学 Oral English reading test scoring method

Non-Patent Citations (2)

Title
王乃峰 (Wang Naifeng), "Research on Audio Feature Extraction and Scene Recognition Based on Deep Neural Networks", China Master's Theses Full-text Database (Information Science and Technology) *
王玉林 (Wang Yulin), "Research and Design of a Spoken English Scoring System", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN107039049A (en) * 2017-05-27 2017-08-11 郑州仁峰软件开发有限公司 A kind of data assessment educational system
CN108364661A (en) * 2017-12-15 2018-08-03 海尔优家智能科技(北京)有限公司 Visualize speech performance appraisal procedure, device, computer equipment and storage medium
CN107886968A (en) * 2017-12-28 2018-04-06 广州讯飞易听说网络科技有限公司 Speech evaluating method and system
CN112133312A (en) * 2020-09-24 2020-12-25 上海松鼠课堂人工智能科技有限公司 Spoken language training method and system based on deep learning
CN114571472A (en) * 2020-12-01 2022-06-03 北京小米移动软件有限公司 Ground attribute detection method and driving method for foot type robot and device thereof
CN114571472B (en) * 2020-12-01 2024-01-23 北京小米机器人技术有限公司 Ground attribute detection method and driving method for foot robot and device thereof
CN113112990A (en) * 2021-03-04 2021-07-13 昆明理工大学 Language identification method of variable-duration voice based on spectrum envelope diagram

Similar Documents

Publication Publication Date Title
CN106653055A (en) On-line oral English evaluating system
CN103559892B (en) Oral evaluation method and system
US9489864B2 (en) Systems and methods for an automated pronunciation assessment system for similar vowel pairs
CN104732977A (en) On-line spoken language pronunciation quality evaluation method and system
CN114373447A (en) A method and system for scoring Chinese-English translation questions
CN105608960A (en) Spoken language formative teaching method and system based on multi-parameter analysis
Liu et al. AI recognition method of pronunciation errors in oral English speech with the help of big data for personalized learning
Ibrahim et al. Quranic verse recitation feature extraction using Mel-frequency cepstral coefficients (MFCC)
Shufang Design of an automatic English pronunciation error correction system based on radio magnetic pronunciation recording devices
Dave et al. Speech recognition: A review
Yousfi et al. Holy Qur'an speech recognition system Imaalah checking rule for warsh recitation
CN111341346A (en) Language expression capability evaluation method and system for fusion depth language generation model
Alkhatib et al. Building an assistant mobile application for teaching arabic pronunciation using a new approach for arabic speech recognition
KR20080018658A (en) Voice comparison system for user selection section
Yousfi et al. Isolated Iqlab checking rules based on speech recognition system
CN116543760A (en) Oral English teaching evaluation method based on artificial intelligence
Adam et al. Analysis of Momentous Fragmentary Formants in Talaqi-like Neoteric Assessment of Quran Recitation using MFCC Miniature Features of Quranic Syllables
Jing et al. The speech evaluation method of English phoneme mobile learning system
Li et al. English sentence pronunciation evaluation using rhythm and intonation
Bhadra et al. Study on feature extraction of speech emotion recognition
CN117423260B (en) Auxiliary teaching method based on classroom speech recognition and related equipment
Bansod et al. Speaker Recognition using Marathi (Varhadi) Language
RU2589851C2 (en) System and method of converting voice signal into transcript presentation with metadata
Hassan et al. Pattern Classification in Recognising Idgham Maal Ghunnah Pronunciation Using Multilayer Perceptrons
Ma et al. Optimization of Computer Aided English Pronunciation Teaching System Based on Speech Signal Processing Technology.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170510