
CN108984229B - Application program starting control method and family education equipment - Google Patents

Application program starting control method and family education equipment

Info

Publication number
CN108984229B
Authority
CN
China
Prior art keywords
voice
target
tutoring
voice signal
student
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810816758.0A
Other languages
Chinese (zh)
Other versions
CN108984229A (en)
Inventor
杨昊民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201810816758.0A priority Critical patent/CN108984229B/en
Publication of CN108984229A publication Critical patent/CN108984229A/en
Application granted granted Critical
Publication of CN108984229B publication Critical patent/CN108984229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/445Program loading or initiating
    • G06F9/44505Configuring for program initiating, e.g. using registry, configuration files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/451Execution arrangements for user interfaces
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A startup control method for an application program and a family education device. The method includes: listening for a voice signal from the outside; detecting whether the voice signal contains a keyword for starting a certain application program of the family education device; if so, identifying the gender of the external user who uttered the voice signal; acquiring the instant emotion of the external user; determining, from several visual styles pre-configured for the application program, a target visual style for improving that instant emotion; starting the application program, loading its main interface according to the target visual style, and displaying on the main interface a virtual animal matched to the user's gender; and controlling the virtual animal to broadcast, in an intonation chosen to improve the instant emotion, a prompt indicating that the application program has been started. The visual style of the application program's main interface can thus be adjusted flexibly according to the user's instant emotion, which makes the interface more engaging and helps increase the user's interest in using the application programs of the family education device.


Description

Application program starting control method and family education equipment
Technical Field
The application relates to the technical field of family education equipment, in particular to a starting control method of an application program and the family education equipment.
Background
At present, more and more primary and secondary school students use family education equipment (such as family education machines) to assist their learning. In general, a family education device has various application programs installed, such as a question-searching application and an entertainment application. In practical use, however, the visual style of the main interface of each application program on the family education device is fixed, which makes the applications less engaging and does little to raise primary and secondary school students' interest in using them.
Disclosure of Invention
The embodiments of the present application disclose a starting control method for an application program and a family education device. The visual style of the main interface of the application program can be adjusted flexibly according to the instant emotion of the user, which makes the interface more engaging and helps increase the interest of the user (such as a primary or secondary school student) in using the application programs of the family education device.
A first aspect of an embodiment of the present application discloses a method for controlling starting of an application program, where the method includes:
the family education equipment monitors voice signals sent by the outside;
the family education equipment detects whether the voice signal contains a keyword for starting a certain application program of the family education equipment;
if the voice signal contains a keyword for starting a certain application program of the family education equipment, the family education equipment identifies, according to the voice signal, the user gender of the external user who uttered the voice signal;
the family education equipment acquires the instant emotion of the outside user;
the family education device determines a target visual style for improving the instant emotion from a plurality of visual styles pre-configured by the application program;
the family education equipment starts the application program, loads a main interface of the application program according to the target visual style, and outputs a virtual animal matched with the gender of the user on the main interface of the application program;
and the family education equipment controls the virtual animal to broadcast prompt information according to the tone used for improving the instant emotion, wherein the prompt information is used for prompting that the application program is started.
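As an editorial illustration of the first aspect, the sketch below strings the steps above into one control flow in Python: keyword detection, gender inference from pitch, emotion-driven style selection, and the launch decision. The helper names, the wake word, the 165 Hz pitch threshold, and the style and tone labels are all assumptions added for illustration; the patent itself does not specify them.

```python
# Illustrative sketch only: helper names, thresholds, and style values are
# assumptions, not APIs or parameters defined by the patent.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LaunchDecision:
    app_name: str
    visual_style: str      # e.g. "festive_red" or "calm_brown" (assumed names)
    mascot: str            # virtual animal shown on the main interface
    prompt_tone: str       # intonation used to broadcast the prompt

KEYWORDS = {"xiaobu": "question_search_app"}   # assumed wake-word -> app mapping

def control_app_launch(voice_text: str, pitch_hz: float, emotion: str) -> Optional[LaunchDecision]:
    """Decide how to launch an app from one intercepted voice signal."""
    app = next((a for k, a in KEYWORDS.items() if k in voice_text.lower()), None)
    if app is None:
        return None                                    # no launch keyword: do nothing
    gender = "male" if pitch_hz < 165.0 else "female"  # assumed pitch threshold
    mascot = "virtual_dog" if gender == "male" else "virtual_cat"
    if emotion == "low":
        style, tone = "festive_red", "cheerful"        # style meant to lift a low mood
    else:
        style, tone = "calm_brown", "soothing"         # style meant to settle a high mood
    return LaunchDecision(app, style, mascot, tone)

if __name__ == "__main__":
    print(control_app_launch("xiaobu, open the app", pitch_hz=120.0, emotion="low"))
```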
As an optional implementation manner, in the first aspect of this embodiment of the present application, the identifying, by the family education device, the user gender of the external user who has sent out the speech signal according to the speech signal includes:
the family education equipment extracts the sound features of the voice signal from the voice signal;
and the family education equipment identifies the user gender of the external user who uttered the voice signal according to the sound features of the voice signal.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the acquiring, by the family education device, the instant emotion of the external user includes:
the family education equipment identifies the instant emotion of the external user according to the tone of the voice signal;
or, the family education device obtains the instant emotion of the external user, including:
the family education equipment detects whether a wearable device is wirelessly connected at present;
if the wearable equipment is wirelessly connected with the family education equipment currently, the family education equipment acquires the sound characteristics of a wearer of the wearable equipment;
the family education device verifies whether the sound features of the wearer of the wearable device match the sound features of the voice signal;
if the sound features match, the family education device notifies the wearable device to acquire the current heart rate data of the wearer of the wearable device and the current blood pressure data of the wearer, and the wearable device determines the instant emotion of the wearer according to the current heart rate data of the wearer and the current blood pressure data of the wearer;
the family education device receives the identification of the instant emotion of the wearer sent by the wearable device so as to obtain the instant emotion of the outside user.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the application is a topic search application, and the method further includes:
the family education equipment monitors the search question voice sent by an external user;
the family education equipment acquires a question searching result corresponding to the question searching voice;
the family education equipment outputs a question searching result corresponding to the question searching voice to a main interface of the application program for displaying;
and the family education equipment controls the virtual animal to broadcast response voice corresponding to the search question voice sent by the external user according to the tone used for improving the instant emotion.
As an optional implementation manner, in the first aspect of this embodiment of the present application, the method further includes:
the family education equipment inquires whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the students, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the students and teacher terminal identifications corresponding to the learning subjects;
if the personal attribute information of the students corresponding to the voice features of the voice signals is stored, the family education equipment queries a target learning subject corresponding to the search result from the target curriculum schedule according to the search result; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
the family education equipment converts the search question voice into search question words and reports the student identity information, the search question words and the target teacher terminal identification to a cloud platform, so that the cloud platform sends the student identity information and the search question words to a teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
The second aspect of the embodiments of this application discloses a family education device, including:
the monitoring unit is used for monitoring voice signals sent by the outside;
the detection unit is used for detecting whether the voice signal contains a keyword for starting a certain application program of the family education equipment;
the recognition unit is used for recognizing, according to the voice signal, the user gender of the external user who uttered the voice signal when the detection result of the detection unit is positive;
the first acquisition unit is used for acquiring the instant emotion of the external user;
a determining unit, configured to determine a target visual style for improving the instant emotion from a plurality of visual styles preconfigured by the application;
the first control unit is used for starting the application program, loading a main interface of the application program according to the target visual style, and outputting a virtual animal matched with the gender of the user on the main interface of the application program;
and the second control unit is used for controlling the virtual animal to broadcast prompt information according to the tone used for improving the instant emotion, wherein the prompt information is used for prompting that the application program is started.
As an optional implementation manner, in the second aspect of the embodiment of the present application, the identification unit is specifically configured to, when the detection result of the detection unit is yes, extract the sound features of the voice signal from the voice signal, and identify the user gender of the external user who uttered the voice signal according to the sound features of the voice signal.
As an optional implementation manner, in a second aspect of the embodiment of the present application, the first obtaining unit is specifically configured to recognize an instant emotion of the external user according to a tone of the voice signal;
alternatively, the first acquiring unit includes:
the detection subunit is used for detecting whether the family education equipment is wirelessly connected with wearable equipment currently;
the acquisition subunit is used for acquiring the sound characteristics of the wearer of the wearable device when the detection subunit detects that the family education device is wirelessly connected with the wearable device currently;
a verification subunit for verifying whether the sound features of the wearer of the wearable device match the sound features of the speech signal;
the interaction subunit is used for informing the wearable device to acquire the current heart rate data of the wearer of the wearable device and the current blood pressure data of the wearer when the verification result of the verification subunit is matched, and determining the instant emotion of the wearer by the wearable device according to the current heart rate data of the wearer and the current blood pressure data of the wearer;
the interaction subunit is further configured to receive the identifier of the instant emotion of the wearer sent by the wearable device, so as to obtain the instant emotion of the external user.
As an optional implementation manner, in the second aspect of this embodiment of the present application, the application is a question-searching application, wherein:
the monitoring unit is also used for monitoring the question searching voice sent by the external user;
the family education device further includes a second acquisition unit, wherein:
the second obtaining unit is used for obtaining a question searching result corresponding to the question searching voice;
the first control unit is further configured to output a question searching result corresponding to the question searching voice to a main interface of the application program for display;
and the second control unit is also used for controlling the virtual animal to broadcast response voice corresponding to the search question voice sent by the external user according to the tone used for improving the instant emotion.
As an optional implementation manner, in the second aspect of this embodiment of the present application, the family education device further includes:
the query unit is used for querying whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the students, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the students and teacher terminal identifications corresponding to the learning subjects; and if the personal attribute information of the students corresponding to the voice characteristics of the voice signals is stored, inquiring a target learning subject corresponding to the search result from the target curriculum schedule by taking the search result as a basis; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
and the conversion unit is used for converting the search question voice into search question words and reporting the student identity information, the search question words and the target teacher terminal identification to a cloud platform, so that the cloud platform sends the student identity information and the search question words to a teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
The third aspect of the embodiments of the present application discloses a family education device, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the start control method of the application program disclosed in the first aspect of the embodiment of the present application.
A fourth aspect of the embodiments of the present application discloses a computer-readable storage medium, which stores a computer program, where the computer program causes a computer to execute the method for controlling the start of an application program disclosed in the first aspect of the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following beneficial effects:
by implementing the embodiment of the application, the home education equipment can flexibly adjust the visual style of the main interface of the application program started by the user according to the instant emotion of the user, so that the interestingness can be improved, and the interest of the user (such as primary and secondary school students) in using the application program of the home education equipment can be promoted. In addition, the visual style of the main interface of the adjusted application program is a target visual style for improving the instant emotion of the user, so that the experience of the user using the application program of the family education device can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for controlling starting of an application according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another method for controlling the start of an application according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a family education device disclosed in an embodiment of the present application;
FIG. 4 is a schematic structural diagram of another family education device disclosed in the embodiments of the present application;
fig. 5 is a schematic structural diagram of another family education device disclosed in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiments of the present application disclose a starting control method for an application program and a family education device. The visual style of the main interface of the application program can be adjusted flexibly according to the instant emotion of the user, which makes the interface more engaging and helps increase the interest of the user (such as a primary or secondary school student) in using the application programs of the family education device. Detailed descriptions follow.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a method for controlling starting of an application according to an embodiment of the present application. As shown in fig. 1, the start control method of the application program may include the steps of:
101. the family education equipment listens to the voice signal sent by the outside.
In this embodiment of the application, the family education device can enable its voice interception function after being powered on, so that it can intercept, in real time, voice signals sent from the outside through the enabled voice interception function.
Optionally, in this embodiment of the application, after the family education device is turned on, it may detect whether a first trajectory for starting the voice interception function has been input by an external user on its display screen; if the first trajectory is received, the voice interception function of the family education device is started. Correspondingly, after the family education device is turned on, it may also detect whether a second trajectory for closing the voice interception function, different from the first trajectory, has been input by an external user on its display screen; if the second trajectory is received, the voice interception function of the family education device can be closed. Enabling the voice interception function only when it is needed, and closing it when it is not, reduces the power consumption of the family education device.
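As a rough sketch of the optional trajectory toggle just described, the code below classifies a drawn touch trajectory by its dominant direction and uses two distinct gestures to switch the voice interception function on and off. The gesture shapes, coordinate format, and distance threshold are assumptions; the patent only requires that the two trajectories differ.

```python
# Sketch under assumptions: the patent only requires two distinct trajectories;
# here a mostly-rightward swipe enables listening and a mostly-leftward swipe disables it.
from typing import List, Tuple

Point = Tuple[float, float]   # (x, y) screen coordinates

def classify_trajectory(points: List[Point]) -> str:
    """Return 'enable', 'disable', or 'unknown' for a drawn trajectory."""
    if len(points) < 2:
        return "unknown"
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) < 50 and abs(dy) < 50:   # too short to count as a gesture (assumed threshold)
        return "unknown"
    if abs(dx) >= abs(dy):
        return "enable" if dx > 0 else "disable"
    return "unknown"

class VoiceInterceptor:
    """Toggles a (stubbed) voice interception function to save power."""
    def __init__(self) -> None:
        self.listening = False

    def on_trajectory(self, points: List[Point]) -> None:
        gesture = classify_trajectory(points)
        if gesture == "enable" and not self.listening:
            self.listening = True    # a real device would start the microphone here
        elif gesture == "disable" and self.listening:
            self.listening = False   # a real device would stop the microphone here

if __name__ == "__main__":
    dev = VoiceInterceptor()
    dev.on_trajectory([(0, 0), (120, 5)])   # rightward swipe -> listening on
    print(dev.listening)
```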
102. The family education equipment detects whether the voice signal contains a keyword for starting a certain application program of the family education equipment; if the voice signal contains a keyword for starting an application program of the family education device, executing step 103-step 107; if the voice signal does not contain the keyword for starting a certain application program of the family education equipment, the process is ended.
For example, the family education device may detect whether the voice signal contains the keyword "Xiaobu" (小布) for starting the question-searching application of the family education device; if so, steps 103 to 107 are executed; if not, the process ends.
103. And the family education equipment identifies the user gender of the external user sending the voice signal according to the voice signal.
In the embodiment of the application, the family education equipment can extract the sound characteristics of the voice signal according to the voice signal; and the family education equipment can identify the user gender of the external user who sends the voice signal according to the voice characteristics of the voice signal.
For example, the family education device can extract the pitch (a sound feature) of the voice signal and recognize the gender of the external user who uttered the voice signal from that pitch. Male vocal cords are longer, wider and thicker, so they vibrate at a lower frequency and produce a lower pitch; female vocal cords are shorter, thinner and narrower, so they vibrate at a higher frequency and produce a higher pitch. Therefore, if the pitch of the voice signal is low, the family education device can determine that the external user who uttered it is male; if the pitch of the voice signal is high, it can determine that the external user is female.
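To make the pitch heuristic above concrete, the sketch below estimates the fundamental frequency of a short audio frame by autocorrelation and applies a simple threshold. The 165 Hz cut-off and the frame parameters are illustrative assumptions; the patent gives no numeric values, and a practical system would use a more robust classifier.

```python
# Illustrative sketch: autocorrelation-based pitch estimate plus an assumed
# 165 Hz male/female threshold. Real systems use far more robust classifiers.
import numpy as np

def estimate_pitch_hz(frame: np.ndarray, sample_rate: int = 16000,
                      fmin: float = 75.0, fmax: float = 400.0) -> float:
    """Estimate the fundamental frequency of a mono audio frame via autocorrelation."""
    frame = frame - np.mean(frame)
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(sample_rate / fmax)           # smallest lag we accept
    lag_max = int(sample_rate / fmin)           # largest lag we accept
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return sample_rate / lag

def guess_gender(frame: np.ndarray, sample_rate: int = 16000) -> str:
    pitch = estimate_pitch_hz(frame, sample_rate)
    return "male" if pitch < 165.0 else "female"   # assumed threshold

if __name__ == "__main__":
    t = np.arange(0, 0.05, 1 / 16000)
    low_voice = np.sin(2 * np.pi * 120 * t)        # synthetic 120 Hz tone
    print(guess_gender(low_voice))                  # -> "male"
```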
104. And the family education equipment acquires the instant emotion of the external user.
As an optional implementation manner, in this embodiment of the application, the family education device may recognize the instant emotion of the external user according to the intonation of the voice signal, where the instant emotion of the external user may include a low emotion and a high emotion. For example, when the intonation of the voice signal is high-pitched and animated, the family education device can recognize the instant emotion of the external user as a high emotion; when the intonation of the voice signal is low and subdued, the family education device can recognize the instant emotion of the external user as a low emotion.
As another optional implementation manner, in this application example, the acquiring, by the family education device, the instant emotion of the external user includes:
the family education equipment detects whether a wearable device is wirelessly connected at present;
if the wearable equipment is wirelessly connected with the family education equipment currently, the family education equipment acquires the sound characteristics of a wearer of the wearable equipment;
the family education device verifies whether the sound features of the wearer of the wearable device match the sound features of the voice signal;
if the sound features match, the family education device can notify the wearable device to acquire the current heart rate data and the current blood pressure data of the wearer of the wearable device, and the wearable device determines the instant emotion of the wearer according to the current heart rate data and the current blood pressure data: when the current heart rate data exceeds a specified heart rate threshold and the current blood pressure data exceeds a specified blood pressure threshold, the wearable device identifies the instant emotion of the wearer as a high emotion; or, when the current heart rate data does not exceed the specified heart rate threshold and the current blood pressure data does not exceed the specified blood pressure threshold, it identifies the instant emotion of the wearer as a low emotion;
and the family education device receives the identification of the instant emotion of the wearer sent by the wearable device to obtain the instant emotion of the external user.
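A minimal sketch of the wearable-side decision in the steps above follows. The heart-rate and blood-pressure thresholds are placeholder values; the text only refers to "specified" thresholds, and the mixed case (one reading above its threshold, one below) is left unspecified by the text.

```python
# Sketch with assumed thresholds: the patent specifies only that both readings
# are compared against "specified" thresholds.
HEART_RATE_THRESHOLD_BPM = 100      # assumed
SYSTOLIC_BP_THRESHOLD_MMHG = 130    # assumed

def classify_instant_emotion(heart_rate_bpm: float, systolic_bp_mmhg: float) -> str:
    """Return 'high' when both readings exceed their thresholds, else 'low'."""
    if heart_rate_bpm > HEART_RATE_THRESHOLD_BPM and systolic_bp_mmhg > SYSTOLIC_BP_THRESHOLD_MMHG:
        return "high"
    if heart_rate_bpm <= HEART_RATE_THRESHOLD_BPM and systolic_bp_mmhg <= SYSTOLIC_BP_THRESHOLD_MMHG:
        return "low"
    # The text leaves the mixed case unspecified; treat it as 'low' here
    # purely so the function is total.
    return "low"

if __name__ == "__main__":
    print(classify_instant_emotion(112, 138))   # -> "high"
    print(classify_instant_emotion(78, 118))    # -> "low"
```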
By implementing the above embodiment, the wearable device worn by the external user can accurately identify the instant emotion of the external user and report it to the family education device, so that the family education device obtains an accurate instant emotion of the external user.
105. The family education device determines a target visual style for improving the instant emotion from among a plurality of visual styles pre-configured by the application.
The visual style may be a combination of color and pattern. For example, when the instant emotion is a low emotion, the color of the target visual style may be red (in color psychology, red represents vitality, health, enthusiasm, joy and celebration) and the pattern of the target visual style may be a festive pattern; for another example, when the instant emotion is a high emotion, the color of the target visual style may be brown or tan (in color psychology, these colors stand for calm and peace) and the pattern of the target visual style may carry cool, quiet cues.
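The color-and-pattern pairing described above amounts to a small lookup keyed by the recognized emotion. The sketch below shows one such table; the concrete color codes, pattern names, and prompt-tone labels are illustrative assumptions, not values taken from the patent.

```python
# Illustrative style table: values are assumptions consistent with the examples
# in the text (red/festive for a low mood, brown/calm for a high mood).
VISUAL_STYLES = {
    "low":  {"color": "#D32F2F", "pattern": "festive",    "prompt_tone": "cheerful"},
    "high": {"color": "#795548", "pattern": "cool_quiet", "prompt_tone": "soothing"},
}

def pick_target_style(instant_emotion: str) -> dict:
    """Choose the pre-configured style intended to improve the given emotion."""
    return VISUAL_STYLES.get(instant_emotion, VISUAL_STYLES["low"])

if __name__ == "__main__":
    style = pick_target_style("low")
    print(f"load main interface with color {style['color']} and {style['pattern']} pattern")
```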
106. The family education equipment starts the application program, loads the main interface of the application program according to the target visual style, and outputs the virtual animal matched with the gender of the user on the main interface of the application program.
The virtual animal matching the gender of the user may be a virtual animal generally preferred by users of that gender. For example, when the gender of the user is male, the matched virtual animal can be a virtual dog, a virtual tortoise, and the like; when the gender of the user is female, the matched virtual animal can be a virtual cat, a virtual bird, and the like.
It should be noted that the virtual animal matching the gender of the user may also be any virtual animal in a set of virtual animals configured for the gender of the user in advance by the family education device, and the embodiment of the present application is not limited thereto.
107. And the family education equipment controls the virtual animal to broadcast prompt information according to the tone used for improving the instant emotion, and the prompt information is used for prompting that the application program is started.
In this embodiment of the application, for example, when the instant emotion is a low emotion, the virtual animal may broadcast the prompt information in a cheerful tone intended to improve that low emotion; for another example, when the instant emotion is a high emotion, the virtual animal may broadcast the prompt information in a tone and manner intended to improve that high emotion.
In the method described in fig. 1, the home education device can flexibly adjust the visual style of the main interface of the application program started by the user according to the instant emotion of the user, so that the interestingness can be improved, and the interest of the user (such as primary and secondary school students) in using the application program of the home education device can be promoted. In addition, the visual style of the main interface of the adjusted application program is a target visual style for improving the instant emotion of the user, so that the experience of the user using the application program of the family education device can be improved.
Example two
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another method for controlling starting of an application according to an embodiment of the present disclosure. As shown in fig. 2, the start control method of the application program may include the steps of:
201. the family education equipment listens to the voice signal sent by the outside.
202. Detecting whether the voice signal contains a keyword for starting a search question application program of the family education equipment or not by the family education equipment; if the voice signal contains the keyword for starting the question searching application program of the family education device, executing step 203-step 212; if the voice signal does not contain the keyword for starting the question searching application program of the family education equipment, the process is ended.
203. And the family education equipment identifies the user gender of the external user sending the voice signal according to the voice signal.
204. The family education device detects whether it is currently wirelessly connected to a wearable device; if so, it acquires the sound features of the wearer of the wearable device and verifies whether they match the sound features of the voice signal; if they match, it notifies the wearable device to acquire the wearer's current heart rate data and current blood pressure data, and the wearable device determines the wearer's instant emotion according to that heart rate data and blood pressure data.
205. The family education device receives the identification of the instant emotion of the wearer sent by the wearable device to obtain the instant emotion of the outside user.
206. The family education device determines a target visual style for improving the instant emotion from among a plurality of visual styles pre-configured by the application.
207. The family education equipment starts the application program, loads the main interface of the application program according to the target visual style, and outputs the virtual animal matched with the gender of the user on the main interface of the application program.
The virtual animal matching the gender of the user may be a virtual animal generally preferred by users of that gender. For example, when the gender of the user is male, the matched virtual animal can be a virtual dog, a virtual tortoise, and the like; when the gender of the user is female, the matched virtual animal can be a virtual cat, a virtual bird, and the like.
It should be noted that the virtual animal matching the gender of the user may also be any virtual animal in a set of virtual animals configured for the gender of the user in advance by the family education device, and the embodiment of the present application is not limited thereto.
208. The family education device controls the virtual animal to broadcast prompt information according to the tone used for improving the instant emotion, wherein the prompt information is used for prompting that the application program is started.
In this embodiment of the application, for example, when the instant emotion is a low emotion, the virtual animal may broadcast the prompt information in a cheerful tone intended to improve that low emotion; for another example, when the instant emotion is a high emotion, the virtual animal may broadcast the prompt information in a tone and manner intended to improve that high emotion.
209. The family education equipment monitors the question searching voice sent by the external user and obtains a question searching result corresponding to the question searching voice.
210. And the family education equipment outputs the question searching result corresponding to the question searching voice to a main interface of the application program for displaying.
211. And the family education equipment controls the virtual animal to broadcast response voice corresponding to the search question voice sent by the external user according to the tone for improving the instant emotion.
In this embodiment of the application, the response voice may be output speech obtained by performing text-to-speech conversion on the search result corresponding to the question-searching voice uttered by the external user. For example, when the question-searching voice uttered by the external user is "What is the radical of the character 戴?", the corresponding search result may be "The radical of 戴 is 十", and the response voice is the output speech obtained by text-to-speech conversion of that search result. For another example, when the question-searching voice uttered by the external user is "Recommend some good sentences describing spring; I want to write a composition", the corresponding search result may be "The drizzle is as fine as silk; spring rain is as precious as oil", and the response voice is the output speech obtained by text-to-speech conversion of that search result.
In this embodiment of the application, the response voice may also be a learning-encouragement voice accompanying the search result (for example, a knowledge-point video) corresponding to the question-searching voice uttered by the external user. For example, when the question-searching voice uttered by the external user is "I want to watch a video about rational numbers", the corresponding search result may be a video about rational numbers, and the response voice may be an encouragement such as "I have found the video about rational numbers for you; study hard!"
212. The family education equipment inquires whether student personal attribute information corresponding to the sound features of the voice signals is stored or not; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the student and teacher terminal identifications corresponding to each learning subject; if the personal attribute information of the student corresponding to the sound feature of the voice signal is stored, go to step 213; if the personal attribute information of the student corresponding to the sound feature of the voice signal is not stored, the flow is ended.
The teacher terminal identification can be a mobile phone number of the teacher terminal or account information of a teaching application installed on the teacher terminal.
213. The family education equipment queries a target learning subject corresponding to the search question result from a target curriculum schedule according to the search question result; and inquiring the target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule.
214. The home education equipment converts the search question voice into search question words, and reports the student identity information, the search question words and the target teacher terminal identification to the cloud platform, so that the cloud platform sends the student identity information and the search question words to the teacher terminal to which the target teacher terminal identification belongs according to the target teacher terminal identification.
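Steps 212 to 214 amount to a voiceprint-keyed lookup of the stored student record, a subject lookup in the target curriculum schedule, and the upload of a small report. The sketch below illustrates those lookups; the record layout, field names, the subject-classification stand-in, and the report format are all assumptions added for illustration.

```python
# Sketch under assumptions: the data layout and field names are illustrative only.
from typing import Optional

# Student attribute records keyed by a voiceprint identifier (layout assumed).
STUDENT_RECORDS = {
    "voiceprint_001": {
        "student_id": "S2018-042",
        "schedule": {            # target curriculum schedule: subject -> teacher terminal id
            "Chinese": "teacher_phone_138xxxx0001",
            "Math": "teacher_phone_139xxxx0002",
        },
    }
}

def subject_of_result(search_result: str) -> str:
    """Stand-in for the subject classification the patent leaves unspecified."""
    return "Math" if "rational number" in search_result else "Chinese"

def build_cloud_report(voiceprint: str, search_result: str, question_text: str) -> Optional[dict]:
    record = STUDENT_RECORDS.get(voiceprint)
    if record is None:
        return None                              # no stored attributes: end the flow
    subject = subject_of_result(search_result)
    teacher_id = record["schedule"].get(subject)
    return {
        "student_id": record["student_id"],
        "question_text": question_text,          # speech already converted to text upstream
        "teacher_terminal_id": teacher_id,
    }

if __name__ == "__main__":
    print(build_cloud_report("voiceprint_001",
                             "a video about rational numbers",
                             "I want to watch a video about rational numbers"))
```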
In this embodiment of the application, steps 212 to 214 enable a teacher to discover, from the collected question-search words sent by students in the subject the teacher teaches (for example, Chinese), the problems those students have encountered in that subject, so that the teacher can explain those problems to the students in a targeted way, which helps improve the students' learning efficiency.
As an optional implementation manner, in this embodiment of the application, after receiving the student identity information and the question-search words sent by the cloud platform, the teacher terminal to which the target teacher terminal identifier belongs may prompt the teacher to award a virtual gift (such as a virtual little red flower) to the student to whom the student identity information belongs, to encourage the student to learn. The teacher terminal may then send the virtual gift to the cloud platform, and the cloud platform forwards it to the family education device. In this way the teacher's encouragement reaches the student, which helps promote the student's learning interest and motivation.
As an optional implementation manner, in this embodiment of the application, the cloud platform may detect whether the total number of virtual gifts pushed to the family education device exceeds a specified number. If it does, the cloud platform may determine a not-yet-activated user function of the question-searching application that corresponds to that total number of virtual gifts (for example, a function of watching the virtual animal perform an animation) and activate that user permission for the family education device. This enriches the user functions available in the question-searching application and helps improve the student's learning interest and motivation.
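The gift flow in the two optional implementations above can be read as a cloud-side counter with an unlock threshold. The sketch below illustrates that reading; the threshold value and the name of the unlockable function are assumptions.

```python
# Illustrative cloud-side counter: the threshold and function names are assumed.
from typing import List

class GiftLedger:
    UNLOCK_THRESHOLD = 10                     # "specified number" in the text (assumed value)
    UNLOCKABLE = ["virtual_animal_dance_animation"]

    def __init__(self) -> None:
        self.gifts_pushed = 0
        self.unlocked: List[str] = []

    def push_gift(self, device_id: str) -> None:
        """Record one virtual gift (e.g. a little red flower) pushed to a device."""
        self.gifts_pushed += 1
        if self.gifts_pushed > self.UNLOCK_THRESHOLD and not self.unlocked:
            # Activate a not-yet-activated user function for the device.
            self.unlocked.extend(self.UNLOCKABLE)

if __name__ == "__main__":
    ledger = GiftLedger()
    for _ in range(11):
        ledger.push_gift("tutoring_device_01")
    print(ledger.unlocked)
```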
As an optional implementation manner, in this embodiment of the application, the personal attribute information of the student may further include a guardian terminal identifier corresponding to the student. Accordingly, the method depicted in fig. 2 may further include:
The family education device reports the student identity information, the question-search words, and the guardian terminal identifier to the cloud platform, so that the cloud platform sends the student identity information and the question-search words to the guardian terminal to which the guardian terminal identifier belongs. The guardian can then discover, from the collected question-search words sent by the student under his or her guardianship, the problems the student has encountered in learning, and can further explain those problems to the student in a targeted way, which helps improve the student's learning efficiency.
As an alternative embodiment, in the method described in fig. 2, the family education device may further perform the following operations:
The family education device notifies the wearable device to turn on its recording microphone and monitor the ambient sound source. The wearable device can then verify whether the monitored ambient sound source matches a child-crying sound source stored in the database; if so, the wearable device plays target music, which is used to attract and divert the student's attention and to relieve the emotion the student feels when encountering difficulty in learning.
Because the ambient sound source picked up by the recording microphone contains background noise, the wearable device can first perform discrete sampling and quantization on the noisy ambient sound source to obtain a data frame. It then constructs a wavelet neural network based on the Morlet wavelet function for the data frame, constructs a particle-swarm fitness function over the parameters of the wavelet neural network, obtains the optimal parameters of the wavelet neural network through a particle swarm optimization algorithm, and feeds the data frame through the wavelet neural network for filtering, thereby removing the noise and extracting a clean voice signal. The wearable device can then check whether the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound sources in the database; if they match, the wearable device plays the target music, which is used to attract and divert the student's attention and to relieve the emotion the student feels when encountering difficulty in learning. Implementing this embodiment improves adaptability to the noise characteristics of different ambient sound sources.
Wherein, wearable equipment check-up whether the voiceprint characteristic of the speech signal who extracts matches with the voiceprint characteristic of the child's source of crying in the database includes:
the wearable device carries out preprocessing on the extracted voice signals, wherein the preprocessing comprises pre-emphasis, framing and windowing processing;
the wearable device extracts the voiceprint features MFCC, LPCC, ΔMFCC, ΔLPCC, energy, the first-order difference of energy, and GFCC from the preprocessed voice signal and combines them into a first multi-dimensional feature vector, wherein: MFCC is the Mel-frequency cepstral coefficient, LPCC is the linear prediction cepstral coefficient, ΔMFCC is the first-order difference of MFCC, ΔLPCC is the first-order difference of LPCC, and GFCC is the Gammatone filter cepstral coefficient;
and the wearable device judges whether the first multi-dimensional feature vector completely matches a second multi-dimensional feature vector corresponding to the voiceprint features of the child-crying sound source in the database; if so, it is verified that the voiceprint features of the extracted voice signal match the voiceprint features of the child-crying sound source in the database.
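As a rough illustration of the feature-vector check above, the sketch below assembles a reduced version of the described vector (MFCC, ΔMFCC, frame energy and its first-order difference, averaged over time) with librosa and compares it against a stored child-crying template. LPCC and GFCC are omitted because librosa does not provide them, and the exact-match test in the text is replaced by a cosine-similarity threshold; these deviations and the threshold value are assumptions made for the sake of a short, runnable sketch.

```python
# Reduced sketch: omits LPCC and GFCC, and uses a similarity threshold instead
# of the exact vector match described in the text.
import numpy as np
import librosa

def voiceprint_vector(y: np.ndarray, sr: int) -> np.ndarray:
    """Build a compact voiceprint vector from a mono signal."""
    y = librosa.effects.preemphasis(y)                    # pre-emphasis step
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # framing/windowing happen inside
    d_mfcc = librosa.feature.delta(mfcc)                  # first-order difference of MFCC
    energy = librosa.feature.rms(y=y)                     # frame energy
    d_energy = librosa.feature.delta(energy)              # first-order difference of energy
    parts = [mfcc, d_mfcc, energy, d_energy]
    return np.concatenate([p.mean(axis=1) for p in parts])  # average over frames

def matches_cry_template(y: np.ndarray, sr: int, template: np.ndarray,
                         threshold: float = 0.9) -> bool:
    vec = voiceprint_vector(y, sr)
    sim = float(np.dot(vec, template) / (np.linalg.norm(vec) * np.linalg.norm(template) + 1e-9))
    return sim >= threshold                               # assumed similarity threshold

if __name__ == "__main__":
    sr = 16000
    y = np.random.randn(sr).astype(np.float32)            # stand-in for a denoised recording
    template = voiceprint_vector(y, sr)                    # pretend this came from the database
    print(matches_cry_template(y, sr, template))           # -> True (self-match)
```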
In this embodiment of the application, the target music may be music preset on the wearable device for attracting and diverting the attention of the student and relieving the emotion the student feels when encountering difficulty in learning; alternatively, the target music may be music acquired by the wearable device from the cloud for the same purpose, and this embodiment of the application is not limited in this respect.
In the method described in fig. 2, the home education device can flexibly adjust the visual style of the main interface of the application program started by the user according to the instant emotion of the user, so that the interestingness can be improved, and the interest of the user (such as primary and secondary school students) in using the application program of the home education device can be promoted. In addition, the visual style of the main interface of the adjusted application program is a target visual style for improving the instant emotion of the user, so that the experience of the user using the application program of the family education device can be improved.
In addition, in the method described in fig. 2, the family education device may load a virtual animal matching the user gender of the external user on the main interface of the application program, so that the virtual animal may broadcast a response voice corresponding to the search voice uttered by the external user according to the intonation for improving the instant emotion of the external user, thereby being beneficial to arousing the interest of the primary and secondary school students in searching for the questions.
In addition, in the method described in fig. 2, the teacher can discover, from the collected question-search words sent by students in the subject the teacher teaches (for example, Chinese), the problems those students have encountered in that subject, and explain those problems to the students in a targeted way, which helps improve the students' learning efficiency.
In addition, in the method described in fig. 2, the emotion of the student when difficulty is encountered in learning can be relieved.
Example three
Referring to fig. 3, fig. 3 is a schematic structural diagram of a family education device disclosed in an embodiment of the present application. As shown in fig. 3, the family education device may include:
the monitoring unit 301 is configured to monitor a voice signal sent by the outside;
a detecting unit 302, configured to detect whether the voice signal contains a keyword for starting an application of a family education device;
a recognition unit 303, configured to recognize, according to the voice signal, a user gender of an external user who has sent the voice signal when a detection result of the detection unit 302 is yes;
a first obtaining unit 304, configured to obtain an instant emotion of an external user;
a determining unit 305, configured to determine a target visual style for improving the instant emotion from a plurality of visual styles pre-configured by the application;
the first control unit 306 is configured to start the application program, load a main interface of the application program according to a target visual style, and output a virtual animal matched with the gender of the user on the main interface of the application program;
and a second control unit 307, configured to control the virtual animal to broadcast a prompt message according to the intonation for improving the instant emotion, where the prompt message is used to prompt that the application has been started.
In this embodiment of the application, the family education device can enable its voice interception function after being powered on, so that the monitoring unit 301 can intercept, in real time, voice signals sent from the outside through the enabled voice interception function.
Optionally, in this embodiment of the application, after the family education device is turned on, it may detect whether a first trajectory for starting the voice interception function has been input by an external user on its display screen; if the first trajectory is received, the voice interception function of the family education device is started, which can reduce the power consumption of the family education device. Correspondingly, after the family education device is turned on, it may also detect whether a second trajectory for closing the voice interception function has been input by an external user on its display screen; if the second trajectory is received, the voice interception function of the family education device can be closed, which can also reduce the power consumption of the family education device.
In this embodiment, the recognition unit 303 is specifically configured to, when the detection unit 302 detects that the voice signal contains a keyword for starting an application program of the family education device, extract the sound features of the voice signal from the voice signal and identify the gender of the external user who uttered the voice signal according to those sound features. For example, the recognition unit 303 may extract the pitch (a sound feature) of the voice signal and recognize the gender of the external user who uttered the voice signal from that pitch.
As an optional implementation manner, in this embodiment of the application, the first obtaining unit 304 may recognize an instant emotion of an external user according to a tone of the voice signal; the instant emotion types of the external users can include low emotions and high emotions. For example, when the intonation of the voice signal is a high pitch, the first obtaining unit 304 may recognize that the instant emotion of the external user is a high emotion; when the intonation of the voice signal is a deep intonation, the first obtaining unit 304 may recognize the immediate emotion of the external user as a low emotion.
As another alternative implementation, as shown in fig. 3, the first obtaining unit 304 may include:
a detecting subunit 3041, configured to detect whether a wearable device is currently wirelessly connected to the home education device;
an obtaining subunit 3042, configured to obtain a sound feature of a wearer of the wearable device when the detecting subunit 3041 detects that the wearable device is wirelessly connected to the home education device currently;
a verification subunit 3043 configured to verify whether the sound characteristics of the wearer of the wearable device match the sound characteristics of the voice signal;
an interaction subunit 3044, configured to notify the wearable device to obtain current heart rate data of a wearer of the wearable device and current blood pressure data of the wearer when the verification result of the verification subunit 3043 is a match, and determine an instant emotion type of the wearer by the wearable device according to the current heart rate data of the wearer and the current blood pressure data of the wearer; when the current heart rate data exceeds a specified heart rate threshold value and the current blood pressure data exceeds a specified blood pressure threshold value, the wearable device identifies the instant emotion of the wearer as a high emotion; or, when the current heart rate data does not exceed the specified heart rate threshold and the current blood pressure data does not exceed the specified blood pressure threshold, identifying the immediate emotion of the wearer as a low emotion;
and the interaction subunit 3044 is further configured to receive the identification of the wearer's instant emotion sent by the wearable device to obtain the external user's instant emotion.
By implementing the above embodiment, the wearable device worn by the external user can accurately identify the instant emotion of the external user and report it to the family education device, so that the family education device obtains an accurate instant emotion of the external user.
Therefore, the home education device described in fig. 3 can flexibly adjust the visual style of the main interface of the application program started by the user according to the instant emotion of the user, so that the interestingness can be improved, and the interest of the user (such as primary and secondary school students) in using the application program of the home education device can be promoted. In addition, the visual style of the main interface of the adjusted application program is a target visual style for improving the instant emotion of the user, so that the experience of the user using the application program of the family education device can be improved.
Example four
Referring to fig. 4, fig. 4 is a schematic structural diagram of another family education device disclosed in the embodiment of the present application. Wherein, the family education device shown in fig. 4 is optimized by the family education device shown in fig. 3. In the family education device shown in fig. 4, the application is a question searching application, and the family education device shown in fig. 4 includes a second obtaining unit 308, a query unit 309 and a conversion unit 310, in addition to all the components of the family education device shown in fig. 3, wherein:
the monitoring unit 301 is further configured to monitor a question searching voice sent by an external user;
a second obtaining unit 308, configured to obtain a question searching result corresponding to the question searching voice;
the first control unit 306 is further configured to output a question searching result corresponding to the question searching voice to a main interface of the application program for display;
the second control unit 307 is further configured to control the virtual animal to broadcast a response voice corresponding to the search voice sent by the external user according to the intonation used for improving the instant emotion.
The query unit 309 is configured to query whether student personal attribute information corresponding to the sound feature of the voice signal is stored; the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects learned by the student and teacher terminal identifications corresponding to the learning subjects; and if the personal attribute information of the student corresponding to the voice feature of the voice signal is stored, inquiring a target learning subject corresponding to the search result from the target curriculum schedule by taking the search result as a basis; inquiring a target teacher terminal identification corresponding to the target learning subject from the target curriculum schedule;
the conversion unit 310 is configured to convert the question searching voice into question searching text and report the student identity information, the question searching text and the target teacher terminal identifier to the cloud platform, so that the cloud platform sends, according to the target teacher terminal identifier, the student identity information and the question searching text to the teacher terminal to which the target teacher terminal identifier belongs; the teacher can then discover, from the collected question searching texts sent by students in the subject (such as language) taught by the teacher, the problems those students encounter in that subject, and explain these problems to the students in a targeted way, which is beneficial to improving the students' learning efficiency.
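The lookup-and-report flow handled by the query unit 309 and the conversion unit 310 can be sketched as follows. This is an assumed illustration only: the profile store keyed by a voiceprint identifier, the curriculum-schedule layout and the cloud endpoint URL are hypothetical placeholders, and the step that classifies the question searching result into a subject is omitted.

```python
import json
import urllib.request

# Hypothetical on-device store: voiceprint ID -> student personal attribute information.
STUDENT_PROFILES = {
    "voiceprint-001": {
        "student_id": "stu-20180001",
        "timetable": {"math": "teacher-term-07", "chinese": "teacher-term-12"},
    }
}

def report_search_question(voiceprint_id: str, target_subject: str, question_text: str,
                           cloud_url: str = "https://cloud.example.com/report") -> bool:
    """Resolve the teacher terminal for the subject of the searched question and
    report the student identity, question text and terminal ID to the cloud platform."""
    profile = STUDENT_PROFILES.get(voiceprint_id)
    if profile is None:
        return False  # no stored student personal attribute information
    teacher_terminal_id = profile["timetable"].get(target_subject)
    if teacher_terminal_id is None:
        return False  # subject not present in the target curriculum schedule
    payload = json.dumps({
        "student_id": profile["student_id"],
        "question_text": question_text,
        "teacher_terminal_id": teacher_terminal_id,
    }).encode("utf-8")
    req = urllib.request.Request(cloud_url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:  # the cloud forwards this to the teacher terminal
        return resp.status == 200
```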
As an optional implementation manner, in this embodiment of the application, the student personal attribute information may further include a guardian terminal identifier corresponding to the student. Correspondingly, the conversion unit 310 may also report the student identity information, the question searching text and the guardian terminal identifier to the cloud platform, so that the cloud platform sends, according to the guardian terminal identifier, the student identity information and the question searching text to the guardian terminal to which the guardian terminal identifier belongs; the guardian can then discover, from the collected question searching texts sent by the student, the problems the student encounters in study, and further explain these problems to the student in a targeted way, which is beneficial to improving the student's learning efficiency.
As an optional implementation manner, in this embodiment of the application, the second control unit 307 may further perform the following operations:
the second control unit 307 may notify the wearable device to start the recording microphone of the wearable device to monitor the environmental sound source; the wearable device may check whether the monitored environmental sound source matches a child crying sound source in a database, and if so, the wearable device plays target music, where the target music is used to attract and divert the student's attention and to relieve the emotion generated when the student encounters difficulty in learning.
Because the environmental sound source picked up by the recording microphone contains background noise, the wearable device may first perform discrete sampling and quantization on the noisy environmental sound source to obtain data frames, construct a wavelet neural network based on the Morlet wavelet function for the data frames, construct a particle swarm fitness function for the parameters of the wavelet neural network, obtain the optimal parameters of the wavelet neural network through a particle swarm algorithm, and input the data frames into the wavelet neural network for filtering, thereby removing the noise and extracting a voice signal. Further, the wearable device may check whether the voiceprint features of the extracted voice signal match the voiceprint features of the child crying sound source in the database; if they match, the wearable device plays the target music, which is used to attract and divert the student's attention and to relieve the emotion generated when the student encounters difficulty in learning. Implementing this embodiment can improve adaptability to the noise characteristics of different environmental sound sources.
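The wavelet-neural-network filter tuned by a particle swarm is beyond a short example, so the sketch below substitutes a much simpler, conventional technique: wavelet threshold denoising with PyWavelets. It is an assumption for illustration only and does not reproduce the Morlet-based network or the particle swarm optimisation described above.

```python
import numpy as np
import pywt

def wavelet_denoise(frame: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Simplified stand-in for the embodiment's filter: decompose a sampled frame,
    soft-threshold the detail coefficients, and reconstruct the cleaned signal."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    # Estimate the noise level from the finest detail band (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(frame)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(frame)]
```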
The wearable device checks whether the voiceprint features of the extracted voice signal match the voiceprint features of the child crying sound source in the database as follows:
the wearable device preprocesses the extracted voice signal, where the preprocessing includes pre-emphasis, framing and windowing;
the wearable device extracts the voiceprint features MFCC, LPCC, ΔMFCC, ΔLPCC, energy, the first-order difference of energy, and GFCC from the preprocessed voice signal to jointly form a first multi-dimensional feature vector, where MFCC is the Mel-frequency cepstral coefficient, LPCC is the linear prediction cepstral coefficient, ΔMFCC is the first-order difference of MFCC, ΔLPCC is the first-order difference of LPCC, and GFCC is the Gammatone filter cepstral coefficient;
and the wearable device judges whether the first multi-dimensional feature vector completely matches a second multi-dimensional feature vector corresponding to the voiceprint features of the child crying sound source in the database; if so, it is verified that the voiceprint features of the extracted voice signal match the voiceprint features of the child crying sound source in the database.
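A compact sketch of this feature-extraction and matching step is given below. It is an assumed simplification: it computes only a subset of the listed features (MFCC, ΔMFCC and frame energy) with librosa, and it replaces the exact-match check with a distance threshold, since real recordings never match a stored template bit for bit; the threshold value is hypothetical.

```python
import numpy as np
import librosa

def voiceprint_vector(y: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Reduced voiceprint: mean MFCC, mean delta-MFCC and mean log frame energy."""
    y = librosa.effects.preemphasis(y)  # pre-emphasis; framing/windowing happen inside librosa
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    d_mfcc = librosa.feature.delta(mfcc)
    energy = librosa.feature.rms(y=y)
    log_energy = np.log(float(energy.mean()) + 1e-8)
    return np.concatenate([mfcc.mean(axis=1), d_mfcc.mean(axis=1), [log_energy]])

def matches_crying_template(y: np.ndarray, template: np.ndarray,
                            sr: int = 16000, max_distance: float = 25.0) -> bool:
    """Compare the extracted voiceprint with a stored child-crying template."""
    return float(np.linalg.norm(voiceprint_vector(y, sr) - template)) <= max_distance
```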
In the embodiment of the application, the target music may be music preset on the wearable device for attracting and diverting the student's attention and relieving the emotion generated when the student encounters difficulty in learning; or, the target music may be music with the same purpose that the wearable device acquires from the cloud, which is not limited in the embodiment of the present application.
In the family education device described in fig. 4, the visual style of the main interface of the application program started by the user can be flexibly adjusted according to the instant emotion of the user, which can increase the fun of use and stimulate the interest of users (such as primary and secondary school students) in using the application programs of the family education device. In addition, since the adjusted visual style of the main interface of the application program is a target visual style for improving the instant emotion of the user, the experience of the user in using the application programs of the family education device can also be improved.
In addition, in the family education device described in fig. 4, the family education device may load a virtual animal matching the user gender of the external user on the main interface of the application program, so that the virtual animal may broadcast a response voice corresponding to the search voice issued by the external user according to the intonation for improving the instant emotion of the external user, thereby being beneficial to arousing the interest of the primary and secondary school students in searching for the questions.
In addition, in the family education device described in fig. 4, the teacher can discover, from the collected question searching texts sent by students in the subject (such as language) taught by the teacher, the problems those students encounter in that subject, so that the teacher can explain these problems to the students in a targeted way, thereby being beneficial to improving the students' learning efficiency.
In addition, in the family education device described in fig. 4, the emotion of the student when difficulty is encountered in learning can be relieved.
Example five
Referring to fig. 5, fig. 5 is a schematic structural diagram of another family education device disclosed in the embodiment of the present application. As shown in fig. 5, the family education device may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
wherein, the processor 502 calls the executable program code stored in the memory 501 to execute the method described in fig. 1 or fig. 2.
An embodiment of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the method described in fig. 1 or fig. 2.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing related hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a tape memory, or any other computer-readable medium that can be used to carry or store data.
The start control method of the application program and the family education device disclosed in the embodiment of the application are introduced in detail, a specific example is applied in the description to explain the principle and the implementation of the application, and the description of the embodiment is only used for helping to understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (8)

1. A startup control method for an application program, characterized in that the method comprises:
a family education device listening for a voice signal from the outside;
the family education device detecting whether the voice signal contains a keyword for opening a certain application program of the family education device;
if the voice signal contains a keyword for opening a certain application program of the family education device, the family education device extracting a sound feature of the voice signal from the voice signal, and identifying, according to the sound feature of the voice signal, a user gender of the external user who uttered the voice signal;
the family education device obtaining an instant emotion of the external user;
the family education device determining, from several visual styles preconfigured for the application program, a target visual style for improving the instant emotion;
the family education device opening the application program, loading a main interface of the application program according to the target visual style, and outputting, on the main interface of the application program, a virtual animal matching the user gender; and
the family education device controlling the virtual animal to broadcast prompt information in an intonation for improving the instant emotion, the prompt information being used to prompt that the application program has been opened;
the method further comprising:
when the application program is a question searching application, the family education device listening for a question searching voice uttered by the external user and obtaining a question searching result corresponding to the question searching voice;
the family education device querying whether student personal attribute information corresponding to the sound feature of the voice signal is stored, wherein the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects studied by the student and a teacher terminal identifier corresponding to each of the learning subjects;
if the student personal attribute information corresponding to the sound feature of the voice signal is stored, the family education device querying, from the target curriculum schedule and on the basis of the question searching result, a target learning subject corresponding to the question searching result, and querying, from the target curriculum schedule, a target teacher terminal identifier corresponding to the target learning subject; and
the family education device converting the question searching voice into question searching text, and reporting the student identity information, the question searching text and the target teacher terminal identifier to a cloud platform, so that the cloud platform sends, according to the target teacher terminal identifier, the student identity information and the question searching text to the teacher terminal to which the target teacher terminal identifier belongs; wherein, after receiving the student identity information and the question searching text, the teacher terminal to which the target teacher terminal identifier belongs prompts the teacher to reward the student to whom the student identity information belongs with a virtual gift for encouraging the student to study, and sends the virtual gift to the cloud platform so that the cloud platform sends the virtual gift to the family education device; and wherein the cloud platform detects whether the total number of virtual gifts that have been pushed to the family education device exceeds a specified number, and if the specified number is exceeded, determines a not-yet-activated user function of the question searching application corresponding to the total number of virtual gifts pushed to the family education device, and activates that user function for the family education device.

2. The startup control method according to claim 1, characterized in that the family education device obtaining the instant emotion of the external user comprises:
the family education device recognizing the instant emotion of the external user according to the intonation of the voice signal;
or, the family education device obtaining the instant emotion of the external user comprises:
the family education device detecting whether a wearable device is currently wirelessly connected;
if a wearable device is currently wirelessly connected to the family education device, the family education device obtaining a sound feature of the wearer of the wearable device;
the family education device verifying whether the sound feature of the wearer of the wearable device matches the sound feature of the voice signal;
if they match, the family education device notifying the wearable device to obtain current heart rate data of the wearer of the wearable device and current blood pressure data of the wearer, the wearable device determining the instant emotion of the wearer according to the current heart rate data of the wearer and the current blood pressure data of the wearer; and
the family education device receiving an identifier of the instant emotion of the wearer sent by the wearable device, so as to obtain the instant emotion of the external user.

3. The startup control method according to claim 2, characterized in that the method further comprises:
the family education device outputting the question searching result corresponding to the question searching voice to the main interface of the application program for display; and
the family education device controlling the virtual animal to broadcast, in the intonation for improving the instant emotion, a response voice corresponding to the question searching voice uttered by the external user.

4. A family education device, characterized by comprising:
a listening unit configured to listen for a voice signal from the outside;
a detection unit configured to detect whether the voice signal contains a keyword for opening a certain application program of the family education device;
an identification unit configured to, when the detection result of the detection unit is yes, extract a sound feature of the voice signal from the voice signal, and identify, according to the sound feature of the voice signal, a user gender of the external user who uttered the voice signal;
a first obtaining unit configured to obtain an instant emotion of the external user;
a determination unit configured to determine, from several visual styles preconfigured for the application program, a target visual style for improving the instant emotion;
a first control unit configured to open the application program, load a main interface of the application program according to the target visual style, and output, on the main interface of the application program, a virtual animal matching the user gender; and
a second control unit configured to control the virtual animal to broadcast prompt information in an intonation for improving the instant emotion, the prompt information being used to prompt that the application program has been opened;
wherein, when the application program is a question searching application, the listening unit is further configured to listen for a question searching voice uttered by the external user;
and the family education device further comprises:
a second obtaining unit configured to obtain a question searching result corresponding to the question searching voice;
a query unit configured to query whether student personal attribute information corresponding to the sound feature of the voice signal is stored, wherein the student personal attribute information at least comprises student identity information and a target curriculum schedule corresponding to the student, and the target curriculum schedule at least comprises a plurality of different learning subjects studied by the student and a teacher terminal identifier corresponding to each of the learning subjects; and, if the student personal attribute information corresponding to the sound feature of the voice signal is stored, to query, from the target curriculum schedule and on the basis of the question searching result, a target learning subject corresponding to the question searching result, and to query, from the target curriculum schedule, a target teacher terminal identifier corresponding to the target learning subject; and
a conversion unit configured to convert the question searching voice into question searching text, and report the student identity information, the question searching text and the target teacher terminal identifier to a cloud platform, so that the cloud platform sends, according to the target teacher terminal identifier, the student identity information and the question searching text to the teacher terminal to which the target teacher terminal identifier belongs; wherein, after receiving the student identity information and the question searching text, the teacher terminal to which the target teacher terminal identifier belongs prompts the teacher to reward the student to whom the student identity information belongs with a virtual gift for encouraging the student to study, and sends the virtual gift to the cloud platform so that the cloud platform sends the virtual gift to the family education device; and wherein the cloud platform detects whether the total number of virtual gifts that have been pushed to the family education device exceeds a specified number, and if the specified number is exceeded, determines a not-yet-activated user function of the question searching application corresponding to the total number of virtual gifts pushed to the family education device, and activates that user function for the family education device.

5. The family education device according to claim 4, characterized in that the first obtaining unit is specifically configured to recognize the instant emotion of the external user according to the intonation of the voice signal;
or, the first obtaining unit comprises:
a detection subunit configured to detect whether a wearable device is currently wirelessly connected to the family education device;
an obtaining subunit configured to obtain a sound feature of the wearer of the wearable device when the detection subunit detects that a wearable device is currently wirelessly connected to the family education device;
a verification subunit configured to verify whether the sound feature of the wearer of the wearable device matches the sound feature of the voice signal; and
an interaction subunit configured to, when the verification result of the verification subunit is a match, notify the wearable device to obtain current heart rate data of the wearer of the wearable device and current blood pressure data of the wearer, the wearable device determining the instant emotion of the wearer according to the current heart rate data of the wearer and the current blood pressure data of the wearer;
the interaction subunit being further configured to receive an identifier of the instant emotion of the wearer sent by the wearable device, so as to obtain the instant emotion of the external user.

6. The family education device according to claim 5, characterized in that the first control unit is further configured to output the question searching result corresponding to the question searching voice to the main interface of the application program for display; and
the second control unit is further configured to control the virtual animal to broadcast, in the intonation for improving the instant emotion, a response voice corresponding to the question searching voice uttered by the external user.

7. A family education device, characterized by comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory to execute the startup control method for an application program according to any one of claims 1 to 3.

8. A computer-readable storage medium storing a computer program, characterized in that, when the computer program runs, the computer program causes a computer to execute the startup control method for an application program according to any one of claims 1 to 3.
CN201810816758.0A 2018-07-24 2018-07-24 Application program starting control method and family education equipment Active CN108984229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810816758.0A CN108984229B (en) 2018-07-24 2018-07-24 Application program starting control method and family education equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810816758.0A CN108984229B (en) 2018-07-24 2018-07-24 Application program starting control method and family education equipment

Publications (2)

Publication Number Publication Date
CN108984229A CN108984229A (en) 2018-12-11
CN108984229B true CN108984229B (en) 2021-11-26

Family

ID=64550297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810816758.0A Active CN108984229B (en) 2018-07-24 2018-07-24 Application program starting control method and family education equipment

Country Status (1)

Country Link
CN (1) CN108984229B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110010173A1 (en) * 2009-07-13 2011-01-13 Mark Scott System for Analyzing Interactions and Reporting Analytic Results to Human-Operated and System Interfaces in Real Time
US20130152000A1 (en) * 2011-12-08 2013-06-13 Microsoft Corporation Sentiment aware user interface customization
CN104202718A (en) * 2014-08-05 2014-12-10 百度在线网络技术(北京)有限公司 Method and device for providing information for user
CN105930035A (en) * 2016-05-05 2016-09-07 北京小米移动软件有限公司 Interface background display method and apparatus
US20160275949A1 (en) * 2013-03-14 2016-09-22 Microsoft Technology Licensing, Llc Voice command definitions used in launching application with a command
CN106201169A (en) * 2016-06-23 2016-12-07 广东小天才科技有限公司 Human-computer interaction learning method and device and terminal equipment
CN106782493A (en) * 2016-11-28 2017-05-31 湖北第二师范学院 A kind of children private tutor's machine personalized speech control and VOD system
CN106803423A (en) * 2016-12-27 2017-06-06 智车优行科技(北京)有限公司 Man-machine interaction sound control method, device and vehicle based on user emotion state
CN106874265A (en) * 2015-12-10 2017-06-20 深圳新创客电子科技有限公司 A kind of content outputting method matched with user emotion, electronic equipment and server
CN107025046A (en) * 2016-01-29 2017-08-08 阿里巴巴集团控股有限公司 Terminal applies voice operating method and system
CN107329990A (en) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 A kind of mood output intent and dialogue interactive system for virtual robot
CN107452400A (en) * 2017-07-24 2017-12-08 珠海市魅族科技有限公司 Voice broadcast method and device, computer installation and computer-readable recording medium
CN107463684A (en) * 2017-08-09 2017-12-12 珠海市魅族科技有限公司 Voice replying method and device, computer installation and computer-readable recording medium

Also Published As

Publication number Publication date
CN108984229A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
US20200126566A1 (en) Method and apparatus for voice interaction
US11475897B2 (en) Method and apparatus for response using voice matching user category
Frick Communicating emotion: The role of prosodic features.
US7940914B2 (en) Detecting emotion in voice signals in a call center
US7590538B2 (en) Voice recognition system for navigating on the internet
EP1125280B1 (en) Detecting emotion in voice signals through analysis of a plurality of voice signal parameters
TW548631B (en) System, method, and article of manufacture for a voice recognition system for identity authentication in order to gain access to data on the Internet
US20020002460A1 (en) System method and article of manufacture for a voice messaging expert system that organizes voice messages based on detected emotions
CN108806686B (en) Starting control method of voice question searching application and family education equipment
IL148414A (en) System and method for a telephonic emotion detection that provides operator feedback
CN113035232B (en) Psychological state prediction system, method and device based on voice recognition
KR102314213B1 (en) System and Method for detecting MCI based in AI
CN108961887A (en) Voice search control method and family education equipment
JP2006061632A (en) Emotion data supplying apparatus, psychology analyzer, and method for psychological analysis of telephone user
JP6915637B2 (en) Information processing equipment, information processing methods, and programs
Qadri et al. A critical insight into multi-languages speech emotion databases
CN115329057A (en) Voice interaction method and device, electronic equipment and storage medium
CN108984229B (en) Application program starting control method and family education equipment
CN108648545A (en) New word reviewing method applied to family education equipment and family education equipment
CN108984742A (en) Man-machine interaction method in black screen state and family education equipment
Eriksson That voice sounds familiar: Factors in speaker recognition
CN114913974A (en) Delirium evaluation method, delirium evaluation device, electronic equipment and storage medium
CN111640447B (en) Method for reducing noise of audio signal and terminal equipment
Zheng et al. The extraction method of emotional feature based on children's spoken speech
CN115910111A (en) Voice interaction method and device, intelligent equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant