
CN112309183A - Interactive listening and speaking exercise system suitable for foreign language teaching - Google Patents


Info

Publication number
CN112309183A
CN112309183A
Authority
CN
China
Prior art keywords
module
feedback
user
user terminal
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011263720.9A
Other languages
Chinese (zh)
Inventor
陈昕昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Institute of Economic and Trade Technology
Original Assignee
Jiangsu Institute of Economic and Trade Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Institute of Economic and Trade Technology filed Critical Jiangsu Institute of Economic and Trade Technology
Priority to CN202011263720.9A priority Critical patent/CN112309183A/en
Publication of CN112309183A publication Critical patent/CN112309183A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract



The present application discloses an interactive listening and speaking practice system suitable for foreign language teaching, comprising a user terminal and an application server. The user terminal includes: a speaker, a microphone and a processor; the processor is used to process voice information; and a first-type communication module, which enables the processor to form a communication connection with the outside. The application server includes: a field analysis module, used to analyze voice information and obtain statement fields; a semantic analysis module, used to output a feedback field corresponding to a statement field according to the semantic analysis result; a sentence combination module, used to generate a corresponding feedback sentence according to the feedback field and grammatical relations; a voice generation module, used to generate feedback voice from the feedback sentence; and a second-type communication module, used to send the feedback voice to the user terminal. The benefit of the present application is that it provides an interactive listening and speaking practice system suitable for foreign language teaching that can effectively simulate real dialogue scenes.


Description

Interactive listening and speaking exercise system suitable for foreign language teaching
Technical Field
The application relates to a listening and speaking exercise system, in particular to an interactive listening and speaking exercise system suitable for foreign language teaching.
Background
Existing listening and speaking practice systems applied to foreign language teaching usually play a recorded sound template and have the user read along with it. This cannot achieve the purpose of interactive training: the user can only practice pronunciation against a fixed template and gets no practice in composing words and sentences.
In other prior-art schemes, spoken-language dialogue is practiced through person-to-person interaction.
Person-to-person interaction places demands on both parties: if the partner is another student rather than a teacher, the training effect is poor; if the partner is an experienced teacher, the cost of the exercise is high, and there is no way to provide every student with a teacher.
At present, there is no listening and speaking practice system that can effectively improve the user's spoken-language ability.
Disclosure of Invention
In order to solve the defects in the prior art, the application provides an interactive listening and speaking exercise system suitable for foreign language teaching.
The interactive listening and speaking exercise system suitable for foreign language teaching comprises: a user terminal, operated by the user to carry out listening and speaking exercises; and an application server, which forms data interaction with the user terminal so as to provide the user terminal with the data required for the exercises. The user terminal includes: a speaker for outputting voice information; a microphone for collecting voice information; a processor for processing the voice information collected by the microphone and outputting voice information through the speaker; and a first-type communication module for enabling the processor to form a communication connection with the outside. The application server includes: a field analysis module for analyzing voice information and obtaining statement fields; a semantic analysis module for outputting a feedback field corresponding to a statement field according to the semantic analysis result; a sentence combination module for generating a corresponding feedback sentence according to the feedback field and grammatical relations; a voice generation module for generating feedback voice from the feedback sentence; and a second-type communication module for sending the feedback voice to the user terminal.
Further, the application server also comprises: a pairing module, used to pair two user terminals so that their voice information can be exchanged online.
Further, the application server pairs two user terminals through the pairing module; the voice information of one paired user terminal is processed by the application server and then sent to the other paired user terminal.
Further, the semantic analysis module comprises a feedback artificial neural network; the feedback artificial neural network comprises several semantic analysis models, which are trained with fields as inputs and outputs.
Further, the feedback artificial neural network outputs a corresponding feedback field and confidence when analyzing the statement field.
Further, the application server includes: an adaptive module, used to adaptively output a reply statement when the confidence of the output of the feedback artificial neural network is lower than a preset value; the adaptive module finds the closest historical statement field stored in the database according to the statement field, and then outputs the feedback field corresponding to that historical statement field.
Further, the user terminal includes: a display module for displaying image information; and a simulation module for generating, in the display module, a virtual portrait for carrying out a dialogue with the user.
Further, the user terminal also includes: a camera, used to collect the user's face image; and the application server further comprises: an expression recognition module, used to generate user expression data from the face image collected by the camera; the expression recognition module inputs the user expression data into the speech analysis module as one input to the feedback artificial neural network.
Further, the application server also comprises: a lip-shape recognition module, used to recognize the user's lip shape from the face image collected by the camera and generate lip-shape recognition data; the lip-shape recognition module inputs the lip-shape recognition data into the speech analysis module as one input to the feedback artificial neural network.
Further, the application server also comprises: a data analysis module, used to analyze and summarize how the user's voice information is analyzed by the semantic analysis module; the data analysis module sends the analysis data to the user terminal and displays it to the user through the display module.
The application has the advantages that: an interactive listening and speaking practice system suitable for foreign language teaching is provided, which can effectively realize the simulation of real dialogue scenes.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic diagram of a system architecture of an interactive practice system for foreign language instruction, according to an embodiment of the present application;
FIG. 2 is a block diagram of an interactive practice system for listening and speaking for foreign language instruction according to one embodiment of the present application;
fig. 3 is a schematic diagram of a semantic analysis module in an interactive listening and speaking practice system for foreign language teaching according to an embodiment of the present application.
The meaning of the reference symbols in the figures:
a system 100 for an interactive practice system for listening and speaking for foreign language instruction;
a user terminal 200;
an application server 300.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Referring to fig. 1 to 3, an interactive listening and speaking practice system for foreign language teaching according to the present application includes: a plurality of user terminals and an application server.
Specifically, the user terminal is used for the user to operate for listening and speaking exercises. As a specific scheme, the user terminal includes: speaker, microphone, camera, treater and first type communication module.
The loudspeaker is used for outputting voice information; the microphone is used for collecting voice information; the processor is used for processing the voice information collected by the microphone and outputting the voice information through the loudspeaker. The camera is used for collecting face images of the user.
As a specific implementation scheme, the user terminal of the present application may be a smart phone, a smart tablet, or a PC. Of course, the user terminal of the present application may also be configured as a dedicated learning device.
As another part of the technical scheme of the application, the application server forms data interaction with the user terminal so as to provide the user terminal with the data required for listening and speaking exercises. The application server has the data-processing and storage capacity of a general server; specifically, it comprises: a field analysis module, a semantic analysis module, a sentence combination module, a voice generation module and a second-class communication module.
The field analysis module is used for analyzing the voice information and obtaining statement fields; the semantic analysis module is used for outputting a feedback field corresponding to the statement field according to a semantic analysis result; the sentence combination module is used for generating a corresponding feedback sentence according to the feedback field and the grammatical relation; the voice generating module is used for generating feedback voice according to the feedback statement; and the second communication module is used for sending the feedback voice to the user terminal.
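The module chain above can be sketched as a simple server-side pipeline. This is a minimal illustration only: all function names and the stubbed field-to-feedback mapping are assumptions for the sketch, not part of the patent; a real system would use trained ASR, semantic-analysis and TTS components in place of the stubs.

```python
# Illustrative sketch of the server-side pipeline: field analysis ->
# semantic analysis -> sentence combination. TTS and the communication
# modules are omitted. All names and mappings here are hypothetical.

def parse_fields(utterance):
    """Field analysis module: turn recognized speech text into statement fields."""
    return utterance.lower().split()

def semantic_feedback(fields):
    """Semantic analysis module: map statement fields to feedback fields.
    A trained feedback network would produce this mapping; stubbed here."""
    replies = {"hello": ["hi", "there"], "how": ["fine", "thanks"]}
    return replies.get(fields[0], ["sorry", "again"])

def combine_sentence(feedback_fields):
    """Sentence combination module: join feedback fields into a sentence
    (real grammatical combination is far richer than this)."""
    return " ".join(feedback_fields).capitalize() + "."

def pipeline(utterance):
    fields = parse_fields(utterance)
    feedback_fields = semantic_feedback(fields)
    return combine_sentence(feedback_fields)

print(pipeline("Hello teacher"))  # → "Hi there."
```

The feedback voice generated from this sentence would then be sent to the user terminal over the second-class communication module.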
Specifically, the field parsing module is used for generating corresponding sentence fields according to the voice information, namely, recognizing words in the audio file. As an extension scheme, the field parsing module may be disposed in the user terminal, that is, the user terminal completes field parsing, and sends a result of the field parsing, that is, field data, to the application server for processing.
Preferably, when no meaningful field can be recognized from the user's voice information, the user terminal prompts the user to re-input the voice; the prompt is given through an anthropomorphic dialogue or an image.
The semantic analysis module analyzes the meaning of a statement field and outputs a feedback field corresponding to it. Specifically, the semantic analysis module includes a feedback artificial neural network; the feedback artificial neural network comprises several semantic analysis models, which are trained with fields as inputs and outputs.
As a preferred scheme of the present application, a feedback artificial neural network is constructed and then trained on the field data of corresponding question-answer dialogues, with the two sides of each exchange serving, respectively, as input data and output data, so that the network learns to output a corresponding feedback field, i.e. a response sentence, for given input field data. This scheme is better suited to applications whose listening and speaking practice scenario is clearly prescribed. The trained feedback artificial neural network gives intelligent feedback for an input field, outputting the feedback field together with a confidence. The application server judges whether the confidence exceeds a preset value; if it does, the feedback field is used for feedback, otherwise the adaptive module takes over.
When the confidence of the feedback field exceeds the preset value, the sentence combination module generates a corresponding feedback sentence from the feedback field according to its meaning, part of speech and grammatical relations; the voice generation module generates feedback voice from the feedback sentence, which is then sent to the user terminal and played through the user terminal's speaker.
As an extension, the voice generation module may instead be placed in the user terminal. When the confidence of the feedback field is below the preset value, the current feedback field is not suitable for the current dialogue exercise. In that case, the adaptive module takes over: it searches for the historical statement field closest to the current statement field, then outputs the feedback field corresponding to that historical statement field. If the history database contains no sufficiently close statement field, a counter-question is taken from the template library as the feedback, the case is flagged in the system, and an administrator is prompted to handle it manually.
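The confidence threshold and the adaptive fallback can be sketched as follows. The threshold value, the use of `difflib` string similarity as the "closest historical statement" measure, and the dictionary-shaped history store are all assumptions for illustration; the patent does not specify any of them.

```python
import difflib

CONFIDENCE_THRESHOLD = 0.8  # the "preset value"; the actual value is not given in the text

def nearest_history_feedback(statement_field, history):
    """Adaptive module sketch: find the closest stored historical statement
    field and return its associated feedback field (None if history is empty)."""
    match = difflib.get_close_matches(statement_field, list(history), n=1, cutoff=0.0)
    return history[match[0]] if match else None

def choose_feedback(statement_field, model_output, history):
    """Route between the network's feedback field and the adaptive fallback."""
    feedback_field, confidence = model_output
    if confidence >= CONFIDENCE_THRESHOLD:
        return feedback_field
    return nearest_history_feedback(statement_field, history)

history = {"how are you": "fine thank you", "what is this": "it is a book"}
# Low confidence: fall back to the nearest historical statement's feedback.
print(choose_feedback("how are you today", ("???", 0.3), history))  # → "fine thank you"
```

A production system would use a semantic distance rather than character-level similarity, but the routing logic is the same.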
Preferably, the application server further comprises: expression identification module and lip-shaped identification module.
The expression recognition module is used to generate user expression data from the face image collected by the camera; the expression recognition module inputs the user expression data into the speech analysis module as one input to the feedback artificial neural network.
The lip-shape recognition module is used to recognize the user's lip shape from the face image collected by the camera and generate lip-shape recognition data; the lip-shape recognition module inputs the lip-shape recognition data into the speech analysis module as one input to the feedback artificial neural network.
The expression recognition module is mainly used to recognize the user's current expression, so as to judge the user's emotion and help analyze the context. As a further scheme, the expression recognition module may itself use a feedback artificial neural network for recognition; the recognized data is input into the semantic analysis module for analysis. The user's expression data may be divided into categories such as happy, general and sad, and each category may be assigned a score as a degree of distinction.
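The category-to-score idea can be made concrete with a small mapping. The specific categories and score values below are assumptions; the text only names "happy, general, sad, etc." and says scores distinguish the categories.

```python
# Hypothetical mapping from recognized expression categories to numeric scores,
# so the expression can be fed to the semantic-analysis network as one input.
EXPRESSION_SCORES = {"happy": 1.0, "general": 0.5, "sad": 0.0}

def expression_feature(category):
    """Turn a recognized expression category into a numeric feature;
    unknown categories default to the neutral score."""
    return EXPRESSION_SCORES.get(category, 0.5)

print(expression_feature("happy"))  # → 1.0
```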
The lip-shape recognition module analyzes the user's pronunciation from the user's lip shape, so as to assist in judging the content of the user's voice information. The data obtained from lip recognition is also fed to the neural network for learning, so that speech-recognition accuracy can improve with the user's lip habits.
As a preferred scheme of the present application, a user terminal includes: the display device comprises a display module and a simulation module, wherein the display module is used for displaying image information; the simulation module is used for generating a virtual portrait for realizing a dialogue with a user at the display module.
As an extended technical solution, the display module may further display a statement field corresponding to the user voice information and a statement field of the voice information sent by the speaker for feedback, that is, display the conversation content.
The scheme is a single-machine practice scheme of the application, namely a scheme for a user to practice listening and speaking by a single person in a learning mode.
Although this technical scheme realizes interactive man-machine conversation practice, the characteristics of the machine make it better suited to beginner-level practice; that is, the scope of the dialogue and the application scenario are subject to preset conditions.
As an extension, the application server further includes: and a pairing module.
The pairing module pairs two user terminals so that their voice information can be exchanged online. The application server pairs the two user terminals through the pairing module; the voice information of one paired user terminal is processed by the application server and then sent to the other paired user terminal.
The pairing module enables two users who need spoken-language practice to train with each other interactively.
As an extension scheme, after the pairing module pairs two users, the users do not converse directly through their user terminals. Instead, each user's voice information is first processed by the semantic analysis module and the other modules in the application server, and is then delivered to the partner's user terminal, which plays the dialogue voice to the partner through the virtual portrait. After hearing it, the partner answers by voice through his or her own terminal, and that voice information is likewise processed by the application server before being returned to the first terminal. Thanks to this server-side processing, two beginners can hold a fluent conversation: each user has the impression of conversing with a more proficient speaker, because the application server analyzes what the user means to express and then produces, through the corresponding algorithms, sentences that conform to proper spoken expression. In this way, the spoken-language level of both users is gradually improved.
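The paired relay can be sketched as: utterance in, server-side rewrite, delivery to the partner. The `polish` function below is a stand-in for the semantic-analysis rewrite; its rule table is purely illustrative and not from the patent.

```python
# Sketch of the paired-practice relay: each user's utterance passes through a
# server-side rewriting step before reaching the partner. `polish` stands in
# for the semantic-analysis rewrite; its behavior is purely illustrative.

def polish(utterance):
    """Stand-in for the server rewriting a beginner's sentence into fluent form."""
    fixes = {"me want": "I would like", "he go": "he goes"}
    for rough, fluent in fixes.items():
        utterance = utterance.replace(rough, fluent)
    return utterance

def relay(sender_utterance, deliver):
    """Process a paired user's speech server-side, then deliver it to the partner."""
    deliver(polish(sender_utterance))

received = []  # messages arriving at the partner's terminal
relay("me want coffee", received.append)
print(received[0])  # → "I would like coffee"
```

In the real system the rewrite would be produced by the forward artificial neural network described next, and delivery would go over the second-class communication module.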
To match the working mode of the pairing module, the semantic analysis module may additionally contain a group of artificial neural networks different from the feedback artificial neural network introduced earlier, here called the forward artificial neural network. Instead of feedback fields, the forward artificial neural network outputs spoken statements from statement fields. As a preferred scheme, the application server first judges whether the statement field produced by the field analysis module meets an expression standard pre-stored in the application server. If the standard is not met, the statement field is input into the forward artificial neural network, which outputs statement information expressing the user's meaning together with a confidence; as before, the confidence is compared against a threshold. The forward artificial neural network can be trained from pre-stored English sentences: a large number of sentences are split into single words, the words are fed to the network unordered as input items, and the corresponding sentences serve as output items.
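The training-data preparation just described (sentence split into words, words shuffled as the input item, original sentence as the output item) can be sketched directly; the pair format and the seeded generator are illustrative choices, and the network itself is out of scope.

```python
import random

# Build (input, output) training pairs for the forward network as described:
# the input is the sentence's words in unordered form, the output is the
# original sentence. The network that consumes these pairs is not shown.

def make_training_pair(sentence, rng):
    words = sentence.split()
    shuffled = words[:]
    rng.shuffle(shuffled)        # unordered input item
    return shuffled, sentence    # target output: the original sentence

rng = random.Random(0)           # seeded for reproducibility
pair = make_training_pair("I would like a cup of tea", rng)
print(pair[1])  # → "I would like a cup of tea"
```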
As a preferred scheme, the forward neural network may serve as a part of the feedback neural network: the forward statement, i.e. the spoken-language expression obtained from the statement field, is produced first, and is then input into a sub artificial neural network for processing to obtain the feedback statement.
Preferably, the neural networks may be trained in a loop: the feedback statement is split and then used continuously as training data for the forward neural network.
As an extension, in pairing mode, the sentences generated from the users' conscious expression and the sentences answering them are used, respectively, as output data and input data for training the feedback neural network (or its sub-network). That is, the sentences generated from the users' statement fields serve as feedback learning material, so the neural networks are trained during paired learning.
As a further extension, the pairing module may adopt three pairing modes. In the first mode, the application server matches together users who have selected the same voice-conversation scene. In the second mode, a user can practise by actively adding friends; these are virtual friends: although a real user stands behind each virtual friend, the application server acts as an intermediary, so a fixed partner can be chosen for practice under a virtual identity. In the third mode, the application server analyzes a user's current conversation topic from the semantic data produced by semantic analysis, searches for other users with similar topics, and then switches from machine answering to matched-user answering.
The pairing module may use the first and third modes in combination.
As an alternative, the application server further comprises a data analysis module, used to analyze and summarize how the user's voice information is analyzed by the semantic analysis module; the data analysis module sends the analysis data to the user terminal and displays it to the user through the display module.
The data analysis module analyzes the sentence data in the user's voice information and compares it with the sentence data processed by the artificial neural networks, so as to feed the user's spoken-language practice situation back to the user. The practice situation can also be analyzed through statistics on question-answer validity.
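One simple "question-answer validity" statistic is the fraction of a user's turns for which the semantic analysis produced a confident answer. The turn format, the threshold and the summary shape below are all assumptions for the sketch.

```python
# Minimal sketch of the data-analysis summary: fraction of turns for which
# semantic analysis produced a confident (valid) answer. The (field, confidence)
# turn format is an assumption, not specified by the patent.

def practice_summary(turns, threshold=0.8):
    """turns: list of (statement_field, confidence) pairs from past exercises."""
    if not turns:
        return {"turns": 0, "valid_ratio": 0.0}
    valid = sum(1 for _, conf in turns if conf >= threshold)
    return {"turns": len(turns), "valid_ratio": valid / len(turns)}

print(practice_summary([("hello", 0.9), ("how r u", 0.4), ("thanks", 0.85)]))
```

The resulting summary is what the display module would present to the user as the practice situation.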
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1.一种适用于外语教学的交互式听说练习系统,其特征在于:1. an interactive listening and speaking practice system applicable to foreign language teaching is characterized in that: 所述适用于外语教学的交互式听说练习系统包括:The interactive listening and speaking practice system suitable for foreign language teaching includes: 用户终端,用于供用户操作以进行听说练习;A user terminal, which is used by the user to perform listening and speaking exercises; 应用服务器,用于与所述用户终端构成数据交互以为所述用户终端提供听说练习所需的数据;an application server, configured to form data interaction with the user terminal to provide the user terminal with data required for listening and speaking practice; 其中,所述用户终端包括:Wherein, the user terminal includes: 扬声器,用于输出语音信息;speaker for outputting voice information; 麦克风,用于采集语音信息;Microphone for collecting voice information; 处理器,用于处理所述麦克风采集的语音信息并通过所述扬声器输出语音信息;a processor for processing the voice information collected by the microphone and outputting the voice information through the speaker; 第一类通讯模块,用于使所述处理器能与外部构成通讯连接;The first type of communication module is used to enable the processor to form a communication connection with the outside; 所述应用服务器包括:The application server includes: 字段解析模块,用于分析语音信息并得出语句字段;The field parsing module is used to analyze the voice information and obtain the sentence field; 语义分析模块,用于根据所述语义分析结果输出对应该语句字段的反馈字段;a semantic analysis module for outputting a feedback field corresponding to the sentence field according to the semantic analysis result; 语句组合模块,用于根据所述反馈字段以及语法关系生成对应的反馈语句;a statement combination module, configured to generate corresponding feedback statements according to the feedback fields and grammatical relationships; 语音发生模块,用于根据反馈语句生成反馈语音;The voice generation module is used to generate feedback voice according to the feedback sentence; 第二类通讯模块,用于将所述反馈语音发送至所述用户终端。The second type of communication module is used for sending the feedback voice to the user terminal. 2.根据权利要求1所述的适用于外语教学的交互式听说练习系统,其特征在于:2. 
the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 1, is characterized in that: 所述应用服务器还包括:The application server also includes: 配对模块,用于将两个所述用户终端进行配对,以使它们语音信息进行在线交互。The pairing module is used for pairing the two user terminals, so that their voice information can be exchanged online. 3.根据权利要求2所述的适用于外语教学的交互式听说练习系统,其特征在于:3. the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 2, is characterized in that: 所述应用服务器通过所述配对模块将两个所述用户终端进行配对,配对后的所述用户终端的语音信息经过所述应用服务器处理后发送到配对的另一个所述用户终端。The application server pairs the two user terminals through the pairing module, and the voice information of the paired user terminal is processed by the application server and sent to another paired user terminal. 4.根据权利要求3所述的适用于外语教学的交互式听说练习系统,其特征在于:4. the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 3, is characterized in that: 所述语义分析模块包含一个反馈人工神经网络;所述反馈人工神经网络包含若干语义分析模型,所述语义分析模型通过将字段作为输入和输出进行训练。The semantic analysis module includes a feedback artificial neural network; the feedback artificial neural network includes a number of semantic analysis models trained by taking fields as input and output. 5.根据权利要求4所述的适用于外语教学的交互式听说练习系统,其特征在于:5. the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 4, is characterized in that: 所述反馈人工神经网络在分析所述语句字段时输出对应的反馈字段以及置信度。The feedback artificial neural network outputs a corresponding feedback field and a confidence level when analyzing the sentence field. 6.根据权利要求5所述的适用于外语教学的交互式听说练习系统,其特征在于:6. 
the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 5, is characterized in that: 所述应用服务器包括:The application server includes: 自适应模块,用于在所述反馈人工神经网络的输出的置信度低于预设值时自适应输出对答语句;an adaptive module, used for adaptively outputting a reply sentence when the confidence level of the output of the feedback artificial neural network is lower than a preset value; 所述自适应模块根据所述语句字段寻找数据库中存储最接近历史语句字段,然后对应输出对应该历史语句字段的反馈字段。The self-adaptive module searches for a historical sentence field stored in the database closest to the historical sentence field according to the sentence field, and then outputs a feedback field corresponding to the historical sentence field. 7.根据权利要求6所述的适用于外语教学的交互式听说练习系统,其特征在于:7. the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 6, is characterized in that: 所述用户终端包括:The user terminal includes: 显示模块,用于显示图像信息;Display module for displaying image information; 模拟模块,用于在所述显示模块生成一个用于与用户实现对话的虚拟人像。The simulation module is used for generating a virtual portrait in the display module for realizing dialogue with the user. 8.根据权利要求7所述的适用于外语教学的交互式听说练习系统,其特征在于:8. the interactive listening and speaking practice system that is applicable to foreign language teaching according to claim 7, is characterized in that: 所述用户终端还包括:The user terminal also includes: 摄像头,用于采集用户的人脸图像;The camera is used to collect the face image of the user; 所述应用服务器还包括:The application server also includes: 表情识别模块,用于根据所述摄像头采集的人脸图像生成所述用户表情数据;an expression recognition module, configured to generate the user expression data according to the face image collected by the camera; 所述表情识别模块将所述用户表情数据输入至所述语音分析模块作为其反馈人工神经网络的一个输入数据。The facial expression recognition module inputs the user facial expression data into the speech analysis module as an input data for its feedback to the artificial neural network. 9.根据权利要求8所述的适用于外语教学的交互式听说练习系统,其特征在于:9. 
The interactive listening and speaking exercise system for foreign language teaching according to claim 8, characterized in that the application server further comprises:
a lip-shape recognition module for recognizing the user's lip shape from the face image captured by the camera and generating lip-shape recognition data;
the lip-shape recognition module inputs the lip-shape recognition data into the speech analysis module as an input to its feedback artificial neural network.
10. The interactive listening and speaking exercise system for foreign language teaching according to claim 9, characterized in that the application server further comprises:
a data analysis module for analyzing and summarizing how the user's voice information is analyzed by the semantic analysis module;
the data analysis module sends the analysis data to the user terminal, which displays it to the user through the display module.
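Claims 4 through 6 describe a confidence-gated feedback pipeline: a trained model maps a sentence field to a feedback field plus a confidence level, and when confidence falls below a preset value an adaptive module instead returns the feedback field of the closest stored historical sentence field. The patent does not specify an implementation; the following is a minimal illustrative sketch in which the threshold, the toy model, the string-similarity measure, and all names (`analyze_sentence`, `adaptive_feedback`, `history_db`) are assumptions, not the claimed method.

```python
# Illustrative sketch of the claim 4-6 flow; all details are assumed.
from difflib import SequenceMatcher

CONFIDENCE_THRESHOLD = 0.7  # the "preset value" of claim 6 (assumed)

def analyze_sentence(sentence_field):
    """Toy stand-in for the trained feedback neural network of claim 4.

    Returns (feedback_field, confidence)."""
    known = {"how are you": ("i am fine, thank you", 0.95)}
    return known.get(sentence_field, ("", 0.0))

def adaptive_feedback(sentence_field, history):
    """Adaptive module of claim 6: nearest-match lookup over the
    historical sentence fields stored in the database (here a dict)."""
    best = max(history,
               key=lambda h: SequenceMatcher(None, sentence_field, h).ratio())
    return history[best]

def respond(sentence_field, history):
    feedback, confidence = analyze_sentence(sentence_field)
    if confidence >= CONFIDENCE_THRESHOLD:
        return feedback          # normal path (claim 5)
    return adaptive_feedback(sentence_field, history)  # fallback (claim 6)

history_db = {"where are you from": "i am from nanjing",
              "what is your name": "my name is mei"}
print(respond("how are you", history_db))        # high-confidence path
print(respond("where are you frm", history_db))  # low-confidence fallback
```

A production system would replace the dict lookup with the trained semantic analysis models and the database of claim 6, but the control flow (confidence test, then nearest-history fallback) would stay the same.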
CN202011263720.9A 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching Pending CN112309183A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011263720.9A CN112309183A (en) 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching


Publications (1)

Publication Number Publication Date
CN112309183A true CN112309183A (en) 2021-02-02

Family

ID=74326707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011263720.9A Pending CN112309183A (en) 2020-11-12 2020-11-12 Interactive listening and speaking exercise system suitable for foreign language teaching

Country Status (1)

Country Link
CN (1) CN112309183A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117275456A (en) * 2023-10-18 2023-12-22 南京龙垣信息科技有限公司 Intelligent listening and speaking training device supporting multiple languages

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW516009B (en) * 2001-09-28 2003-01-01 Inventec Corp On-line virtual community system for human oral foreign language pairing instruction and the method thereof
TW200516518A (en) * 2003-11-07 2005-05-16 Inventec Corp On-line life spoken language learning system combining local computer learning and remote training and method thereof
CN101551947A (en) * 2008-06-11 2009-10-07 俞凯 Computer system for assisting spoken language learning
CN106203490A (en) * 2016-06-30 2016-12-07 江苏大学 Based on attribute study and the image ONLINE RECOGNITION of interaction feedback, search method under a kind of Android platform
CN106875940A (en) * 2017-03-06 2017-06-20 吉林省盛创科技有限公司 A kind of Machine self-learning based on neutral net builds knowledge mapping training method
CN107578004A (en) * 2017-08-30 2018-01-12 苏州清睿教育科技股份有限公司 Learning method and system based on image recognition and interactive voice
CN110444087A (en) * 2019-07-26 2019-11-12 深圳市讯呼信息技术有限公司 A kind of intelligent language teaching machine device people
CN110853429A (en) * 2019-12-17 2020-02-28 陕西中医药大学 An intelligent English teaching system



Similar Documents

Publication Publication Date Title
CN107203953B (en) Teaching system based on internet, expression recognition and voice recognition and implementation method thereof
CN108000526B (en) Dialogue interaction method and system for intelligent robot
US20170330567A1 (en) Vocabulary generation system
JP6705956B1 (en) Education support system, method and program
US11183187B2 (en) Dialog method, dialog system, dialog apparatus and program that gives impression that dialog system understands content of dialog
JP6719741B2 (en) Dialogue method, dialogue device, and program
CN104795065A (en) Method for increasing speech recognition rate and electronic device
CN114821744B (en) Virtual character driving method, device and equipment based on expression recognition
CN117332072B (en) Dialogue processing, voice abstract extraction and target dialogue model training method
KR20210123545A (en) Method and apparatus for conversation service based on user feedback
CN119418725A (en) A multimodal classroom emotion recognition method and system based on modality adaptive learning
KR20200002141A (en) Providing Method Of Language Learning Contents Based On Image And System Thereof
CN111078010B (en) Man-machine interaction method and device, terminal equipment and readable storage medium
CN101739852B (en) Speech recognition-based method and device for realizing automatic oral interpretation training
CN112309183A (en) Interactive listening and speaking exercise system suitable for foreign language teaching
US20240202634A1 (en) Dialogue training device, dialogue training system, dialogue training method, and computer-readable medium
CN116741143B (en) Digital-body-based personalized AI business card interaction method and related components
KR102413860B1 (en) Voice agent system and method for generating responses based on user context
JP7418106B2 (en) Information processing device, information processing method and program
CN118014084A (en) Multi-modal interaction method based on large language model
CN116226411B (en) Interactive information processing method and device for interactive project based on animation
KR102604277B1 (en) Complex sentiment analysis method using speaker separation STT of multi-party call and system for executing the same
CN118170882A (en) Virtual character-based dialogue method, device, equipment and storage medium
CN111897434A (en) System, method, and medium for signal control of virtual portrait
CN118197111A (en) Language online learning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination