Detailed Description
The embodiments of the application provide a method and a system for testing the speech perception ability of a child using a cochlear implant, and solve the technical problem that the prior art lacks a means of quantitatively and qualitatively evaluating the auditory masking effect experienced by a child using a cochlear implant in a multi-person competitive context. The method comprises the steps of simulating various contexts by acquiring various types of masking sounds; respectively acquiring the masking effect that each type of masking sound has on the child using the cochlear implant; further separating the masking effects into an energy masking effect and an information masking effect by using a masking separation model; evaluating the energy masking effect and the information masking effect respectively to obtain evaluation grades; and then evaluating the relative contributions of the energy masking effect and the information masking effect according to the weight ratio of the evaluation grades. The evaluation result can be expressed by the evaluation grades and the weight ratio of the energy masking effect to the information masking effect, thereby achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.
Summary of the application
The artificial cochlea (cochlear implant) is an electronic device that converts sound into an electrical signal in a certain coding form by means of an external speech processor and directly excites the auditory nerve through an electrode system implanted in the human body, so as to recover, improve and reconstruct the auditory function of a deaf person; it is currently the most successful biomedical engineering device in application. The use of cochlear implants by deaf children is an important application, and the noisy scenes of daily environments require the cochlear implant to perform well against auditory masking in multi-person conversation, namely in a multi-person competitive context; evaluating the auditory masking effect is therefore of great significance for improving performance in such contexts. In the known technology, evaluation is basically carried out by simulating a living environment and relying on feedback from the cochlear implant user; however, the information fed back by children is not comprehensive, and no qualitative and quantitative evaluation result is obtained. The prior art therefore lacks a means of quantitatively and qualitatively evaluating the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a speech perception capability test method for a child using an artificial cochlea, wherein the method comprises the following steps: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; obtaining a first masking effect of the first type of masking sound; obtaining a second masking effect of the second type of masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first grade of the first energy masking speech perception capability assessment result; obtaining a second grade of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first grade to the second grade; and obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a speech perception capability test method for a child using a cochlear implant, wherein the method includes:
S100: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different;
Specifically, noise masking refers to the phenomenon in which a person's hearing threshold for one sound is raised by the influence of noise; the sound causing this phenomenon is referred to as a masking noise, that is, a masking sound as defined herein. The first type masking sound and the second type masking sound refer to masking noises simulating different contexts in daily life, such as steady-state background noise, dynamic noise, speech-spectrum noise, multi-talker speaking noise and the like; the first type masking sound and the second type masking sound cannot be the same type of noise.
S200: obtaining a first masking effect of the first type of masking sound;
S300: obtaining a second masking effect of the second type of masking sound;
Specifically, the first masking effect refers to the amount by which the hearing threshold for another sound is raised due to the first type masking sound; the second masking effect refers to the amount by which the hearing threshold for another sound is raised due to the second type masking sound. By way of example and not limitation, the masking effect can be evaluated using the threshold rise value and also using the masking similarity, the other sound preferably being a fixed closed phrase. Keeping the target information unchanged ensures that the only factor influencing the masking effect is the type of masking sound.
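As a minimal illustration of the threshold-rise characterization above (the function name and the example dB values are hypothetical, not taken from the application), the masking effect can be computed as the difference between the masked and quiet hearing thresholds:

```python
def masked_threshold_shift(quiet_threshold_db: float, masked_threshold_db: float) -> float:
    """Masking effect expressed as the rise (in dB) of the hearing
    threshold for the target phrase caused by the masking sound."""
    return masked_threshold_db - quiet_threshold_db

# Example: target phrase audible at 20 dB in quiet, at 35 dB under masking
shift = masked_threshold_shift(20.0, 35.0)  # 15 dB threshold rise
```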
S400: inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect;
Specifically, the energy masking effect refers to masking that occurs in the auditory periphery, resulting from the superposition of the target signal and the masking signal on the frequency spectrum; the information masking effect refers to masking that occurs in the auditory center, generated by the similarity of the target signal and the masking signal in their information patterns. The masking separation model is a neural network model capable of separating the first masking effect and the second masking effect into the energy masking effect and the information masking effect; it is trained on multiple groups of masking effect information with the corresponding energy masking effect identification information and information masking effect identification information, and the supervised learning ends when the output of the masking separation model converges. Through the masking separation model, the first masking effect and the second masking effect can be accurately separated into the energy masking effect and the information masking effect, which reduces the number of evaluation variables when further assessing how different factors influence the information masking effect or the energy masking effect.
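The application describes the masking separation model only as a trained neural network. A toy sketch of its inference step might look as follows; the layer sizes, random weights and the tanh activation are all assumptions for illustration, and real weights would come from the supervised training described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained masking separation model: a one-hidden-layer
# network mapping the two measured masking effects to an
# (energy masking, information masking) decomposition.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def separate(first_effect: float, second_effect: float):
    x = np.array([first_effect, second_effect])
    h = np.tanh(x @ W1 + b1)             # hidden layer
    energy, information = (h @ W2 + b2)  # two-component output
    return float(energy), float(information)
```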
S500: inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result;
Specifically, the first energy masking speech perception capability evaluation result is a scoring result obtained by inputting the energy masking effect into the energy masking speech perception capability evaluation model for intelligent analysis. The model is established on the basis of a neural network model and has its characteristics. An artificial neural network is an abstract mathematical model, proposed and developed on the basis of modern neuroscience, that aims to reflect the structure and function of the human brain. A neural network is an operational model formed by a large number of interconnected nodes (or neurons); each node represents a specific output function, called an excitation function, and the connection between every two nodes carries a weighted value for the signal passing through it, called a weight, which is equivalent to the memory of the artificial neural network. The output of the network is an expression of a logic strategy determined by the network's connection mode. The energy masking speech perception capability evaluation model established on this basis can output accurate information for the first energy masking speech perception capability evaluation result, providing strong analysis and calculation capability and achieving an accurate and efficient technical effect.
S600: inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result;
Specifically, the first information masking speech perception capability evaluation result is a scoring result obtained by inputting the information masking effect into the information masking speech perception capability evaluation model for intelligent analysis. The information masking speech perception capability evaluation model is a neural network model of the same type as the energy masking speech perception capability evaluation model; established on the basis of a neural network model, it can output accurate information for the first information masking speech perception capability evaluation result, providing strong analysis and calculation capability and achieving an accurate and efficient technical effect.
S700: obtaining a first grade of the first energy masking speech perception capability assessment result;
S800: obtaining a second grade of the first information masking speech perception capability evaluation result;
Specifically, the first grade refers to the grade of the degree to which the energy masking effect of the first type masking sound and the second type masking sound masks the target sound, and is obtained from the first energy masking speech perception capability evaluation result; the second grade refers to the grade of the degree to which the information masking effect of the first type masking sound and the second type masking sound masks the target sound, and can be obtained from the first information masking speech perception capability evaluation result. Through the first grade and the second grade, the energy masking effect and the information masking effect of the two types of masking sounds can each be evaluated quantitatively.
S900: obtaining a first weight ratio according to the ratio of the first grade to the second grade;
S1000: and obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result.
Specifically, the first weight ratio is obtained by calculating the ratio of the first grade to the second grade, preferably characterized as a percentage. The first weight ratio makes it possible to understand the relative proportions of the energy masking effect and the information masking effect in the context of the first type masking sound or the second type masking sound, which in turn helps identify the masking factors that most strongly affect the cochlear implant and thus improve its performance. Further, the masking effect is divided into the energy masking effect and the information masking effect, each evaluated separately to obtain the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result; the degree of influence of each is then calculated, the proportions of energy masking and information masking in different contexts are obtained, and the speech perception capability evaluation result of the first child is derived. This evaluation of the speech perception capability of a child using a cochlear implant provides a definite direction for further research.
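The grade-weighted combination described above can be sketched as follows; the example grade values and the linear combination of the two scores are illustrative assumptions rather than the application's prescribed formula:

```python
def combined_evaluation(first_grade: float, second_grade: float,
                        energy_score: float, info_score: float):
    """Weight the two evaluation results by the grade ratio and return
    (energy share %, information share %, combined child score)."""
    total = first_grade + second_grade
    w_energy, w_info = first_grade / total, second_grade / total
    combined = w_energy * energy_score + w_info * info_score
    return 100.0 * w_energy, 100.0 * w_info, combined

# Example: grades 3 and 2 -> energy masking accounts for 60% of the effect
shares = combined_evaluation(3.0, 2.0, 80.0, 60.0)
```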
Further, based on the obtaining of the first masking effect of the first type of masking sound, step S300 further includes:
S310: obtaining speech intelligibility of the first type of masking sound;
S320: obtaining target sound keyword information;
S330: obtaining masking similarity according to the first type masking sound and the target sound keyword information;
S340: obtaining the first masking effect according to the speech intelligibility and the masking similarity.
In particular, the speech intelligibility of the first type of masking sound refers to its clarity, i.e. the degree to which it can be understood; the higher the speech intelligibility, the stronger the first masking effect. The target sound keyword information refers to keywords needed to understand the target sound; for example, if a closed short sentence is "Zhang San eats lunch", then the name, the action of eating and the meal can serve as the target sound keyword information. The masking similarity is determined by comparing the first type masking sound with the target sound keyword information and counting the target keywords repeated in the first type masking sound: the larger the number, the higher the masking similarity, and the stronger the first masking effect. Characterizing the first masking effect by the speech intelligibility and the masking similarity facilitates qualitative and quantitative processing of the masking effect. The second masking effect is determined in the same manner.
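A minimal sketch of the keyword-overlap similarity described above; whitespace tokenization and the fractional score are assumptions for illustration:

```python
def masking_similarity(masker_speech: str, target_keywords: list[str]) -> float:
    """Fraction of target-sound keywords that also appear in the masking
    speech; more repeated keywords -> higher similarity -> stronger masking."""
    masker_words = set(masker_speech.split())
    repeated = sum(1 for kw in target_keywords if kw in masker_words)
    return repeated / len(target_keywords)

# Example: two of the three target keywords recur in the masking speech
sim = masking_similarity("he eats rice every day", ["name", "eats", "rice"])
```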
Further, the method further includes step S1100:
S1110: obtaining first position information of a target sound;
S1120: obtaining second position information of the masking sound;
S1130: obtaining position difference information of the first position information and the second position information;
S1140: constructing a regression model of the spatial position and the information masking effect according to the incident angle information, the incident height information and the incident distance information;
S1150: obtaining a first spatial position influence parameter according to the position difference information and the regression model;
S1160: and correcting the first information masking speech perception capability evaluation result according to the first spatial position influence parameter to obtain a second information masking speech perception capability evaluation result.
Specifically, the first position information of the target sound refers to the sound emission position of the target sound, and the second position information of the masking sound refers to the sound emission position of the masking sound; the position difference information of the first position information and the second position information refers to the distance between the two emission positions. One optional way of determining the incident angle, the incident height and the incident distance is as follows: a sound sensor captures the horizontal and vertical propagation times of the target sound and the masking sound; the horizontal and vertical incident distances are determined from the propagation times; the incident distance, i.e. the hypotenuse, is obtained according to the Pythagorean theorem; and the incident angle and the incident height are then determined. Further, according to the relationship by which the incident angle, the incident height and the incident distance influence the information masking effect, a regression model of the spatial position and the information masking effect is established; this relationship is determined according to the specific application example and is not limited herein. Furthermore, according to the influence of the position difference information on the information masking effect, the regression model yields the correlation between the spatial position and the information masking effect, namely the first spatial position influence parameter; generally speaking, with other variables fixed, the smaller the position difference, the smaller the first spatial position influence parameter, the specific values not being limited herein.
The first information masking speech perception capability evaluation result is corrected by the first spatial position influence parameter, the preferred correction being that the larger the first spatial position influence parameter, the higher the grade corresponding to the first information masking speech perception capability evaluation result; the corrected result is the second information masking speech perception capability evaluation result. By adding the influence of the first spatial position influence parameter on the information masking effect into the first information masking speech perception capability evaluation result, the resulting second information masking speech perception capability evaluation result carries more comprehensive information, making the evaluation more accurate.
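As a hedged sketch of the correction just described, suppose the regression reduces to a simple linear relation between the position difference and the spatial influence parameter; the coefficient `k` and the multiplicative correction rule are hypothetical choices, since the application leaves the exact relationship open:

```python
def spatial_influence(position_difference_m: float, k: float = 0.05) -> float:
    """Hypothetical regression output: the influence parameter shrinks as
    the target and masking sources move closer together."""
    return k * position_difference_m

def corrected_info_score(first_info_score: float, influence: float) -> float:
    """Larger spatial influence parameter -> higher corrected grade,
    yielding the second information masking evaluation result."""
    return first_info_score * (1.0 + influence)

# Example: 2 m separation raises a score of 60 to 66
second_result = corrected_info_score(60.0, spatial_influence(2.0))
```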
Further, based on the obtaining the first position information of the target sound and the obtaining the second position information of the masking sound, the step S1140 further includes:
S1141: obtaining first incident angle information, first incident height information and first incident distance information of the target sound;
S1142: obtaining the first position information according to the first incident angle information, the first incident height information and the first incident distance information;
S1143: obtaining second incident angle information, second incident height information and second incident distance information of the masking sound;
S1144: and obtaining the second position information according to the second incident angle information, the second incident height information and the second incident distance information.
Specifically, from the first incident angle information, the first incident height information and the first incident distance information of the target sound, the horizontal distance between the target sound and the child using the cochlear implant can be determined by treating the first incident angle as the acute angle, the first incident height as one right-angled side and the first incident distance as the hypotenuse of a right triangle; the spatial position of the target sound, that is, the first position information, can then be obtained. The second position information of the masking sound is preferably determined in the same manner as the first position information. Quantizing the positions of the target sound and the masking sound through the first position information and the second position information allows the information masking effect to be evaluated qualitatively and quantitatively.
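The right-triangle reconstruction above can be sketched as follows; the units and the convention of measuring from the listener's position are assumptions:

```python
import math

def position_from_incidence(incident_height: float, incident_distance: float):
    """Treat the incident distance as the hypotenuse and the incident height
    as one leg of a right triangle; recover the horizontal distance to the
    listener and the incident angle (degrees)."""
    horizontal = math.sqrt(max(incident_distance**2 - incident_height**2, 0.0))
    angle_deg = math.degrees(math.atan2(incident_height, horizontal))
    return horizontal, angle_deg

# Example: height 4 m, incident distance 5 m -> horizontal distance 3 m
pos = position_from_incidence(4.0, 5.0)
```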
Further, the method further includes S1200:
S1210: obtaining first time information of a target sound reaching two ears;
S1220: obtaining second time information of the masking sound reaching two ears;
S1230: obtaining time difference information of the first time information and the second time information;
S1240: obtaining first intensity information of a target sound reaching two ears;
S1250: obtaining second intensity information of the masking sound reaching two ears;
S1260: obtaining intensity difference information of the first intensity information and the second intensity information;
S1270: obtaining a first intensity difference influence parameter according to the intensity difference information;
S1280: and correcting the first information masking speech perception capability evaluation result according to the first intensity difference influence parameter to obtain a third information masking speech perception capability evaluation result.
Specifically, the first time information refers to the recorded propagation time of the target sound to both ears, preferably captured with a sound sensor; the second time information refers to the recorded propagation time of the masking sound to both ears, determined in the same manner as the first time information. The first intensity information refers to the sound intensity of the target sound on reaching both ears, and the second intensity information refers to the sound intensity of the masking sound on reaching both ears, preferably expressed in decibels and determined with a sound sensor. Further, the difference between the first intensity information and the second intensity information is calculated to obtain the intensity difference information. The larger the intensity difference, the more obvious its influence on the masking effect, and the first intensity difference influence parameter is obtained according to this relationship. The first information masking speech perception capability evaluation result is then corrected according to the first intensity difference influence parameter to obtain a third information masking speech perception capability evaluation result. Supplementing the evaluation with these influence factors achieves the technical effect of a more accurate evaluation result.
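The intensity-difference step can be sketched as below; the coefficient `k` mapping the level difference to an influence parameter is a hypothetical placeholder, since the application only states that a larger difference has a more pronounced influence:

```python
def intensity_difference_db(target_db: float, masker_db: float) -> float:
    """Intensity difference (dB) between target and masking sound at the ears."""
    return abs(target_db - masker_db)

def intensity_influence(diff_db: float, k: float = 0.02) -> float:
    """Hypothetical mapping: a larger level difference has a more
    pronounced influence on the masking effect."""
    return k * diff_db

# Example: 70 dB target vs. 60 dB masker -> influence parameter 0.2
param = intensity_influence(intensity_difference_db(70.0, 60.0))
```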
Further, the method further includes step S1300:
S1310: obtaining quantity information of the masking sounds;
S1320: carrying out vector conversion on the quantity information of the masking sound to obtain a vector modulus corresponding to the quantity information;
S1330: obtaining a first quantity influence parameter according to the vector modulus;
S1340: and correcting the first information masking speech perception capability evaluation result according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result.
Specifically, the quantity information of the masking sounds refers to the number of masking-sound talkers, which can optionally be determined by counting distinct decibel intervals. The vector conversion means that the quantity information of the masking sound is represented by a vector whose modulus corresponds to the quantity and whose direction corresponds to the direction of the masking sound. The first quantity influence parameter is established according to the influence of the number of masking sounds on the information masking effect; in general, the larger the vector modulus, the more obvious the information masking effect. The first information masking speech perception capability evaluation result is then corrected according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result; adding the influence of the number of masking sounds in this way makes the fourth information masking speech perception capability evaluation result more accurate and comprehensive.
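One illustrative reading of the vector conversion above is to represent each masking talker as a unit vector in its incoming direction, so that the modulus of the summed vector reflects the number of maskers; this specific construction is an assumption, as the application does not fix the conversion:

```python
import math

def masker_vector_modulus(directions_deg: list[float]) -> float:
    """Sum one unit vector per masking talker; the modulus grows with the
    number of maskers and is maximal when they arrive from one direction."""
    vx = sum(math.cos(math.radians(d)) for d in directions_deg)
    vy = sum(math.sin(math.radians(d)) for d in directions_deg)
    return math.hypot(vx, vy)

# Example: three co-located maskers -> modulus 3
mod = masker_vector_modulus([0.0, 0.0, 0.0])
```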
Further, based on inputting the energy masking effect into the energy masking speech perception capability evaluation model to obtain the first energy masking speech perception capability evaluation result, steps S500 and S600 further include:
S510: training a neural network model by using an energy masking effect in historical data as a training data set until the model reaches a convergence state, and constructing an energy masking speech perception capability evaluation model;
S520: inputting the energy masking effect as input information into the energy masking speech perception capability evaluation model;
S530: and obtaining output information of the energy masking speech perception capability evaluation model, wherein the output information comprises the first energy masking speech perception capability evaluation result.
Specifically, the energy masking speech perception capability evaluation model is itself a neural network model in machine learning, which reflects many basic characteristics of human brain function and is a highly complex nonlinear dynamical learning system. Such a model can continuously learn from training data; here the neural network model is trained with the energy masking effect in historical data as the training data set. The energy masking speech perception capability evaluation model continuously corrects itself, and when its output reaches a preset accuracy rate or convergence state, the supervised learning process ends. Training the model on data enables it to process input data more accurately, so that the output first energy masking speech perception capability evaluation result is more accurate, achieving the technical effects of accurately obtaining data information and making the evaluation more intelligent. The information masking speech perception capability evaluation model is preferably constructed on the same principle.
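A toy training loop in the spirit of S510, stopping when the loss no longer improves; the linear model, learning rate and synthetic "historical data" are all assumptions for illustration, since the application's model is a generic neural network:

```python
import numpy as np

# Synthetic stand-in for historical data: energy masking effects (dB)
# paired with hypothetical ground-truth evaluation scores.
rng = np.random.default_rng(1)
effects = rng.uniform(0.0, 30.0, size=64)
scores = 100.0 - 2.0 * effects

# Gradient-descent training of a minimal (linear) model until the loss
# stops improving, mirroring the "train until convergence" step.
w, b, lr = 0.0, 0.0, 2e-3
prev_loss = float("inf")
for _ in range(30000):
    err = w * effects + b - scores
    loss = float(np.mean(err ** 2))
    if prev_loss - loss < 1e-9:   # converged -> end supervised learning
        break
    prev_loss = loss
    w -= lr * 2.0 * float(np.mean(err * effects))
    b -= lr * 2.0 * float(np.mean(err))

def evaluate_energy_masking(effect: float) -> float:
    """Score an energy masking effect with the trained model."""
    return w * effect + b
```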
In summary, the method and the system for testing the speech perception ability of the child using the cochlear implant provided by the embodiment of the application have the following technical effects:
1. Obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; obtaining a first masking effect of the first type of masking sound; obtaining a second masking effect of the second type of masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first grade of the first energy masking speech perception capability evaluation result; obtaining a second grade of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first grade to the second grade; and obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result. A multi-person competitive context is simulated by obtaining multiple masking sounds; the masking effects on the child using the cochlear implant under the multiple masking sounds are obtained respectively; the masking effects are further divided into the energy masking effect and the information masking effect by the masking separation model; each is evaluated respectively to obtain an evaluation grade; and the relative contributions of the energy masking effect and the information masking effect are then evaluated according to the weight ratio of the evaluation grades. The evaluation result can be expressed by the evaluation grades and the weight ratio of the energy masking effect to the information masking effect, achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.
2. The influence of the first spatial position influence parameter, the first intensity difference influence parameter and the first quantity influence parameter on the information masking effect is added into the first information masking speech perception capability evaluation result, so that the masking speech perception capability evaluation result is more accurate and comprehensive.
Example two
Based on the same inventive concept as the speech perception capability test method for children using cochlear implants in the foregoing embodiments, as shown in fig. 2, embodiments of the present application provide a speech perception capability test system for children using cochlear implants, wherein the system includes:
a first obtaining unit 11, configured to obtain a first type masking sound and a second type masking sound, where the first type masking sound and the second type masking sound are different;
a second obtaining unit 12, the second obtaining unit 12 being configured to obtain a first masking effect of the first type masking sound;
a third obtaining unit 13, the third obtaining unit 13 being configured to obtain a second masking effect of the second type of masking sound;
a fourth obtaining unit 14, where the fourth obtaining unit 14 is configured to input the first masking effect and the second masking effect into a masking separation model, and obtain an energy masking effect and an information masking effect;
a fifth obtaining unit 15, where the fifth obtaining unit 15 is configured to input the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result;
a sixth obtaining unit 16, where the sixth obtaining unit 16 is configured to input the information masking effect into an information masking speech perception capability evaluation model, and obtain a first information masking speech perception capability evaluation result;
a seventh obtaining unit 17, wherein the seventh obtaining unit 17 obtains the first level of the first energy masking speech perception capability evaluation result;
an eighth obtaining unit 18, where the eighth obtaining unit 18 obtains a second level of the first information masking speech perception capability evaluation result;
a ninth obtaining unit 19, wherein the ninth obtaining unit 19 obtains a first weight ratio according to a ratio of the first level and the second level;
a tenth obtaining unit 20, where the tenth obtaining unit 20 obtains the speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result, and the first information masking speech perception capability evaluation result.
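By way of illustration only, the combination performed by the ninth and tenth obtaining units may be sketched as follows. The embodiment does not fix a concrete formula, so the function name, the numeric levels, and the linear blend below are assumptions, not the claimed implementation:

```python
def combined_evaluation(energy_result, info_result, first_level, second_level):
    """Hypothetical sketch: weight the two partial evaluation results
    by the ratio of their levels (the 'first weight ratio')."""
    weight = first_level / (first_level + second_level)
    return weight * energy_result + (1.0 - weight) * info_result
```

For example, with an energy-masking result of 80, an information-masking result of 60, and levels 3 and 1, the child's combined result under this assumed blend would be 75.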
Further, the system further comprises:
an eleventh obtaining unit configured to obtain speech intelligibility of the first type masking sound;
a twelfth obtaining unit configured to obtain target sound keyword information;
a thirteenth obtaining unit configured to obtain a masking similarity according to the first type masking sound and the target sound keyword information;
a fourteenth obtaining unit configured to obtain the first masking effect according to the speech intelligibility and the masking similarity.
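One hypothetical way to realize the thirteenth and fourteenth obtaining units is sketched below; the Jaccard overlap for masking similarity and the weighted blend for the masking effect are illustrative assumptions, as the embodiment does not specify the similarity measure or the combination rule:

```python
def masking_similarity(masker_words, target_keywords):
    """Assumed similarity: Jaccard overlap between the masker's
    vocabulary and the target sound keyword information."""
    masker, target = set(masker_words), set(target_keywords)
    union = masker | target
    return len(masker & target) / len(union) if union else 0.0

def first_masking_effect(speech_intelligibility, similarity,
                         w_int=0.5, w_sim=0.5):
    """Assumed blend of intelligibility and similarity into one score;
    the weights are placeholders, not values from the embodiment."""
    return w_int * speech_intelligibility + w_sim * similarity
```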
Further, the system further comprises:
a fifteenth obtaining unit configured to obtain first position information of a target sound;
a sixteenth obtaining unit configured to obtain second position information of the masking sound;
a seventeenth obtaining unit configured to obtain position difference information of the first position information and the second position information;
an eighteenth obtaining unit, configured to construct a regression model of the spatial position and the information masking effect according to the incident angle information, the incident height information, and the incident distance information;
a nineteenth obtaining unit, configured to obtain a first spatial position influence parameter according to the position difference information and the regression model;
and the twentieth obtaining unit is used for correcting the first information masking speech perception capability evaluation result according to the first spatial position influence parameter to obtain a second information masking speech perception capability evaluation result.
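The spatial correction described above can be sketched as follows. A linear model stands in for the regression of spatial position against information masking effect; the coefficients and the multiplicative correction are assumptions, since the embodiment only states that a regression model is constructed:

```python
def spatial_influence(angle_diff_deg, height_diff_m, distance_diff_m,
                      coeffs=(0.004, 0.05, 0.02), intercept=0.0):
    """Hypothetical linear regression: the influence parameter grows
    with the angular, height, and distance separation of the sources.
    The coefficient values are placeholders."""
    a, h, d = coeffs
    return intercept + a * angle_diff_deg + h * height_diff_m + d * distance_diff_m

def second_info_result(first_info_result, influence):
    """Assumed correction: scale the first information masking result
    by the spatial position influence parameter."""
    return first_info_result * (1.0 + influence)
```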
Further, the system further comprises:
a twenty-first obtaining unit configured to obtain first incident angle information, first incident height information, and first incident distance information of the target sound;
a twenty-second obtaining unit configured to obtain the first position information according to the first incident angle information, the first incident height information, and the first incident distance information;
a twenty-third obtaining unit configured to obtain second incident angle information, second incident height information, and second incident distance information of the masking sound;
a twenty-fourth obtaining unit, configured to obtain the second position information according to the second incident angle information, the second incident height information, and the second incident distance information.
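A minimal sketch of deriving position information from incident angle, height, and distance is given below. Treating the triple as cylindrical-style coordinates (angle and distance in the horizontal plane, height on the vertical axis) is an assumption; the embodiment does not state the coordinate convention:

```python
import math

def position_from(incident_angle_deg, incident_height, incident_distance):
    """Assumed convention: angle/distance span the horizontal plane,
    height is the vertical coordinate."""
    theta = math.radians(incident_angle_deg)
    x = incident_distance * math.cos(theta)
    y = incident_distance * math.sin(theta)
    return (x, y, incident_height)

def position_difference(p1, p2):
    """Euclidean distance between the target and masker positions."""
    return math.dist(p1, p2)
```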
Further, the system further comprises:
a twenty-fifth obtaining unit, configured to obtain first time information when a target sound reaches both ears;
a twenty-sixth obtaining unit configured to obtain second time information when the masking sound reaches both ears;
a twenty-seventh obtaining unit configured to obtain time difference information of the first time information and the second time information;
a twenty-eighth obtaining unit configured to obtain first intensity information of a target sound reaching both ears;
a twenty-ninth obtaining unit configured to obtain second intensity information of the masking sound reaching both ears;
a thirtieth obtaining unit configured to obtain intensity difference information of the first intensity information and the second intensity information;
a thirty-first obtaining unit, configured to obtain a first intensity difference influence parameter according to the intensity difference information;
a thirty-second obtaining unit, configured to correct the first information masking speech perception capability evaluation result according to the first intensity difference influence parameter, and obtain a third information masking speech perception capability evaluation result.
Further, the system further comprises:
a thirty-third obtaining unit configured to obtain quantity information of the masking sounds;
a thirty-fourth obtaining unit configured to perform vector conversion on the quantity information of the masking sounds and obtain a vector modulus corresponding to the quantity information;
a thirty-fifth obtaining unit configured to obtain a first quantity influence parameter according to the vector modulus;
and a first correcting unit configured to correct the first information masking speech perception capability evaluation result according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result.
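The vector conversion performed by the thirty-fourth and thirty-fifth obtaining units may be illustrated as below. Interpreting the quantity information as a per-type count vector, and scaling its Euclidean modulus by a constant, are assumptions introduced for illustration:

```python
import math

def quantity_influence(counts_per_type, k=0.05):
    """Hypothetical sketch: the vector modulus of the per-type
    masker-count vector yields the first quantity influence parameter;
    k is a placeholder scale."""
    modulus = math.sqrt(sum(c * c for c in counts_per_type))
    return k * modulus

def fourth_info_result(first_info_result, influence):
    """Assumed correction by the quantity influence parameter."""
    return first_info_result * (1.0 + influence)
```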
Further, the system further comprises:
the first construction unit is used for training the neural network model by taking an energy masking effect in historical data as a training data set until the model reaches a convergence state, and constructing the energy masking speech perception capability evaluation model;
a first input unit for inputting the energy masking effect as input information into the energy masking speech perception capability evaluation model;
a first output unit, configured to obtain output information of the energy masking speech perception capability evaluation model, where the output information includes the first energy masking speech perception capability evaluation result.
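The training loop of the first construction unit, iterating on historical data until the model reaches a convergence state, can be sketched with a deliberately tiny stand-in. A one-weight linear model trained by gradient descent replaces the neural network here purely for illustration; the real embodiment trains a neural network whose architecture is not specified:

```python
def train_linear_model(features, targets, lr=0.01, tol=1e-9, max_epochs=50000):
    """Placeholder for the evaluation model: gradient descent on a
    one-weight linear model, stopping when the loss change falls
    below tol (the 'convergence state')."""
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    n = len(features)
    for _ in range(max_epochs):
        grad_w = grad_b = loss = 0.0
        for x, y in zip(features, targets):
            err = (w * x + b) - y
            loss += err * err / n
            grad_w += 2.0 * err * x / n
            grad_b += 2.0 * err / n
        if abs(prev_loss - loss) < tol:  # convergence reached
            break
        prev_loss = loss
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On toy historical data following `target = 2 * feature`, the loop recovers a weight near 2, mirroring how the construction unit would fit the evaluation model to historical energy masking effects.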
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to FIG. 3.
Based on the same inventive concept as the speech perception capability test method for a child using a cochlear implant in the foregoing embodiments, an embodiment of the present application further provides a speech perception capability test system for a child using a cochlear implant, comprising: a processor coupled to a memory, the memory being configured to store a program that, when executed by the processor, causes the system to perform the method of any implementation of the first aspect.
The electronic device 300 includes: a processor 302, a communication interface 303, and a memory 301. Optionally, the electronic device 300 may further include a bus architecture 304. The communication interface 303, the processor 302, and the memory 301 may be connected to one another through the bus architecture 304; the bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean there is only one bus or one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication interface 303 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), a wired access network, and the like.
The memory 301 may be, but is not limited to, a ROM or another type of static storage device that can store static information and instructions, a RAM or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through the bus architecture 304. The memory may also be integral to the processor.
The memory 301 is configured to store the computer-executable instructions for executing the solution of the present application, and their execution is controlled by the processor 302. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement the speech perception capability test method for a child using a cochlear implant provided by the above-described embodiments of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
The embodiment of the application provides a speech perception capability test method for a child using an artificial cochlea, wherein the method comprises the following steps: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; obtaining a first masking effect of the first type masking sound; obtaining a second masking effect of the second type masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first level of the first energy masking speech perception capability evaluation result; obtaining a second level of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first level to the second level; and obtaining a speech perception capability evaluation result of a first child according to the first weight ratio, the first energy masking speech perception capability evaluation result, and the first information masking speech perception capability evaluation result. The method simulates a multi-person competitive context by obtaining multiple masking sounds, respectively obtains the masking effects of the child using the cochlear implant on the multiple masking sounds, divides the masking effects into the energy masking effect and the information masking effect by using the masking separation model, evaluates the energy masking effect and the information masking effect respectively to obtain evaluation levels, and then evaluates the effect proportion of the energy masking effect and the information masking effect according to the weight ratio of the evaluation levels. The evaluation result can thus be expressed by the evaluation levels and the weight ratio of the energy masking effect and the information masking effect, achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect of children using cochlear implants in a multi-person competitive context.
Those of ordinary skill in the art will understand that the various numbers such as "first" and "second" mentioned in this application are used only for convenience of description and are not intended to limit the scope of the embodiments of this application or to indicate an order of precedence. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one" means one or more, and "at least two" means two or more. "At least one", "any one", or similar expressions refer to any combination of these items, including any combination of a single item or plural items. For example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where each of a, b, and c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or performed with a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations.