
CN113476041A - Speech perception capability test method and system for children using artificial cochlea - Google Patents


Info

Publication number
CN113476041A
CN113476041A (application CN202110685445.8A)
Authority
CN
China
Prior art keywords
masking
information
speech perception
obtaining
energy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110685445.8A
Other languages
Chinese (zh)
Other versions
CN113476041B (en)
Inventor
刘济生
陶朵朵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Affiliated Hospital of Suzhou University
Original Assignee
First Affiliated Hospital of Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Affiliated Hospital of Suzhou University filed Critical First Affiliated Hospital of Suzhou University
Priority to CN202110685445.8A
Publication of CN113476041A
Application granted
Publication of CN113476041B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00: Electrotherapy; Circuits therefor
    • A61N 1/18: Applying electric currents by contact electrodes
    • A61N 1/32: Applying electric currents by contact electrodes; alternating or intermittent currents
    • A61N 1/36: Applying electric currents by contact electrodes; alternating or intermittent currents for stimulation
    • A61N 1/36036: Applying electric currents by contact electrodes; alternating or intermittent currents for stimulation of the outer, middle or inner ear
    • A61N 1/36038: Cochlear stimulation
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/12: Audiometering
    • A61B 5/121: Audiometering; evaluating hearing capacity
    • A61B 5/123: Audiometering; evaluating hearing capacity; subjective methods
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Veterinary Medicine (AREA)
  • Otolaryngology (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Radiology & Medical Imaging (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
  • Prostheses (AREA)

Abstract



The invention provides a method and system for testing the speech perception capability of children using cochlear implants, including: obtaining a first type of masking sound and a second type of masking sound; obtaining the first and second masking effects of the first and second types of masking sound; obtaining the energy masking effect and the information masking effect; inputting the energy masking effect into an energy-masking speech perception capability evaluation model to obtain a first energy-masking speech perception capability evaluation result; inputting the information masking effect into an information-masking speech perception capability evaluation model to obtain a first information-masking speech perception capability evaluation result; obtaining a first grade of the first energy-masking evaluation result; obtaining a second grade of the first information-masking evaluation result; obtaining a first weight ratio; and obtaining the speech perception capability evaluation result of the first child. This solves the technical problem that the prior art lacks a quantitative and qualitative means of assessing the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.


Description

Speech perception capability test method and system for children using artificial cochlea
Technical Field
The invention relates to the technical field of biological simulation, in particular to a method and a system for testing speech perception capability of a child using an artificial cochlea.
Background
A cochlear implant is an electronic device that uses an external speech processor to convert sound into an electrical signal in a particular coding form and directly stimulates the auditory nerve through an electrode array implanted in the body, thereby restoring, improving, and reconstructing the hearing of a deaf person; it is the most successful biomedical engineering device in clinical use today.
Deaf children are a major application population for cochlear implants, and the noisy scenes of daily life require the implant to cope well with auditory masking during multi-person conversation, that is, in a multi-person competitive context; evaluating the auditory masking effect is therefore of great significance for improving performance in such contexts. Currently known techniques basically evaluate by simulating the living environment and relying on feedback from the implant user, but the information fed back by children is incomplete and yields no qualitative or quantitative evaluation result.
However, in implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problem:
the prior art lacks a quantitative and qualitative means of assessing the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.
Disclosure of Invention
The embodiments of the present application provide a method and a system for testing the speech perception capability of children using cochlear implants, solving the prior-art problem of lacking a quantitative and qualitative means of evaluating the auditory masking effect experienced by such children in a multi-person competitive context. The method simulates multiple contexts by acquiring several types of masking sound; obtains the masking effect of the child using the cochlear implant for each type; separates the masking effects into an energy masking effect and an information masking effect with a masking separation model; evaluates each effect to obtain an evaluation grade; and uses the weight ratio of the grades to gauge the relative contribution of the two effects. Because the evaluation result is expressed through the evaluation grades and the weight ratio of the energy and information masking effects, the auditory masking effect of children using cochlear implants in a multi-person competitive context is evaluated both qualitatively and quantitatively.
In view of the above problems, the present application provides a speech perception capability test method and system for a child using a cochlear implant.
In a first aspect, an embodiment of the present application provides a speech perception capability test method for a child using a cochlear implant, wherein the method includes: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; obtaining a first masking effect of the first type of masking sound; obtaining a second masking effect of the second type of masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first grade of the first energy masking speech perception capability assessment result; obtaining a second grade of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first grade to the second grade; and obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result.
In another aspect, an embodiment of the present application provides a speech perception capability test system for a child using a cochlear implant, wherein the system includes: a first obtaining unit configured to obtain a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; a second obtaining unit for obtaining a first masking effect of the first type masking sound; a third obtaining unit configured to obtain a second masking effect of the second type of masking sound; a fourth obtaining unit, configured to input the first masking effect and the second masking effect into a masking separation model, and obtain an energy masking effect and an information masking effect; a fifth obtaining unit, configured to input the energy masking effect into an energy masking speech perception capability evaluation model, and obtain a first energy masking speech perception capability evaluation result; a sixth obtaining unit, configured to input the information masking effect into an information masking speech perception capability evaluation model, and obtain a first information masking speech perception capability evaluation result; a seventh obtaining unit that obtains a first level of the first energy masking speech perception capability evaluation result; an eighth obtaining unit that obtains a second level of the first information masking speech perception capability evaluation result; a ninth obtaining unit that obtains a first weight ratio according to a ratio of the first rank to the second rank; a tenth obtaining unit, configured to obtain a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result, and the first information masking speech perception capability evaluation result.
In a third aspect, an embodiment of the present application provides a speech perception capability test system for a child using a cochlear implant, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
obtaining a first type of masking sound and a second type of masking sound, the two being different; obtaining a first masking effect of the first type of masking sound and a second masking effect of the second type of masking sound; inputting the two masking effects into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy-masking speech perception capability evaluation model to obtain a first energy-masking speech perception capability evaluation result; inputting the information masking effect into an information-masking speech perception capability evaluation model to obtain a first information-masking speech perception capability evaluation result; obtaining a first grade of the former result and a second grade of the latter; obtaining a first weight ratio from the ratio of the first grade to the second grade; and obtaining the speech perception capability evaluation result of the first child from the first weight ratio and the two evaluation results. By obtaining multiple masking sounds, a multi-person competitive context is simulated; the masking effect of the child using the cochlear implant is obtained for each masking sound; the masking effects are separated into energy and information masking effects by the masking separation model and evaluated separately to obtain evaluation grades; and the weight ratio of the grades gauges the relative contribution of the two effects. The evaluation result can thus be expressed through the evaluation grades and the weight ratio of the energy and information masking effects, achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect of children using cochlear implants in a multi-person competitive context.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer, so that it can be implemented according to the content of the description, and to make the above and other objects, features, and advantages more comprehensible, detailed embodiments of the present application are set forth below.
Drawings
FIG. 1 is a schematic flow chart illustrating a method for testing speech perception of a child using a cochlear implant according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a speech perception capability testing system for children using cochlear implants according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a third obtaining unit 13, a fourth obtaining unit 14, a fifth obtaining unit 15, a sixth obtaining unit 16, a seventh obtaining unit 17, an eighth obtaining unit 18, a ninth obtaining unit 19, a tenth obtaining unit 20, an electronic device 300, a memory 301, a processor 302, a communication interface 303, and a bus architecture 304.
Detailed Description
The embodiments of the present application provide a method and a system for testing the speech perception capability of children using cochlear implants, solving the prior-art problem of lacking a quantitative and qualitative means of evaluating the auditory masking effect experienced by such children in a multi-person competitive context. The method simulates multiple contexts by acquiring several types of masking sound; obtains the masking effect of the child using the cochlear implant for each type; separates the masking effects into an energy masking effect and an information masking effect with a masking separation model; evaluates each effect to obtain an evaluation grade; and uses the weight ratio of the grades to gauge the relative contribution of the two effects. Because the evaluation result is expressed through the evaluation grades and the weight ratio of the energy and information masking effects, the auditory masking effect of children using cochlear implants in a multi-person competitive context is evaluated both qualitatively and quantitatively.
Summary of the application
A cochlear implant is an electronic device that uses an external speech processor to convert sound into an electrical signal in a particular coding form and directly stimulates the auditory nerve through an electrode array implanted in the body, thereby restoring, improving, and reconstructing the hearing of a deaf person; it is the most successful biomedical engineering device in clinical use today. Deaf children are a major application population for cochlear implants, and the noisy scenes of daily life require the implant to cope well with auditory masking during multi-person conversation, that is, in a multi-person competitive context, so evaluating the auditory masking effect is of great significance for improving performance in such contexts. Currently known techniques basically evaluate by simulating the living environment and relying on feedback from the implant user, but the information fed back by children is incomplete and yields no qualitative or quantitative result; thus the prior art lacks a quantitative and qualitative means of assessing the auditory masking effect experienced by children using cochlear implants in a multi-person competitive context.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a speech perception capability test method for a child using an artificial cochlea, wherein the method comprises the following steps: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different; obtaining a first masking effect of the first type of masking sound; obtaining a second masking effect of the second type of masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first grade of the first energy masking speech perception capability assessment result; obtaining a second grade of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first grade to the second grade; and obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a speech perception capability test method for a child using a cochlear implant, wherein the method includes:
s100: obtaining a first type masking sound and a second type masking sound, wherein the first type masking sound and the second type masking sound are different;
Specifically, noise masking refers to the phenomenon in which noise raises a person's hearing threshold for a sound; the sound causing this phenomenon is called a masking noise, that is, the masking sound defined herein. The first-type and second-type masking sounds are masking noises simulating different everyday contexts, such as steady-state background noise, dynamic noise, speech-spectrum noise, or multi-talker babble; the first type and the second type must not be the same kind of noise.
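As an illustration of S100 only (not part of the patent; function names, sampling rate, and modulation rate are all hypothetical), the sketch below synthesizes two different masker types in pure Python: steady-state Gaussian background noise and a 4 Hz amplitude-modulated "dynamic" noise.

```python
import math
import random

def make_steady_noise(n=16000, seed=0):
    """Steady-state Gaussian background noise: one illustrative first-type masker."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def make_dynamic_noise(n=16000, fs=16000, mod_hz=4.0, seed=1):
    """Amplitude-modulated noise: one illustrative 'dynamic' second-type masker."""
    rng = random.Random(seed)
    samples = []
    for i in range(n):
        # Slow sinusoidal envelope makes the noise level fluctuate over time.
        envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * mod_hz * i / fs))
        samples.append(envelope * rng.gauss(0.0, 1.0))
    return samples
```

The two generators satisfy the requirement of S100 that the two masking sounds differ in kind: one has a constant level, the other a time-varying level.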
S200: obtaining a first masking effect of the first type of masking sound;
S300: obtaining a second masking effect of the second type of masking sound;
Specifically, the first masking effect refers to the amount by which the first type of masking sound raises the hearing threshold for another sound, and the second masking effect to the amount by which the second type of masking sound does so. By way of example and not limitation, the masking effect can be evaluated using the threshold-rise value and also using masking similarity; the other sound is preferably a fixed closed-set phrase. Keeping the target information invariant ensures that the only factor influencing the masking effect is the type of masking sound.
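The threshold-rise measure just described can be sketched as follows (illustrative only; the dB readings in the usage lines are hypothetical):

```python
def masking_effect_db(masked_threshold_db, quiet_threshold_db):
    """Masking effect = rise of the hearing threshold for the fixed target
    phrase caused by the masker: masked threshold minus threshold in quiet."""
    return masked_threshold_db - quiet_threshold_db

# Hypothetical readings for the two masker types against the same target phrase:
first_masking_effect = masking_effect_db(55.0, 30.0)   # first-type masker: 25 dB rise
second_masking_effect = masking_effect_db(62.0, 30.0)  # second-type masker: 32 dB rise
```

Because the target phrase (and hence the quiet threshold) is held fixed, the difference between the two results reflects only the masker type.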
S400: inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect;
Specifically, the energy masking effect refers to masking arising in the auditory periphery from the overlap of the target signal and the masking signal in the frequency spectrum, while the information masking effect refers to masking arising in the auditory center from the similarity of the target signal and the masking signal in information pattern. The masking separation model is a neural network model capable of separating the first and second masking effects into the energy masking effect and the information masking effect; it is trained on multiple groups of masking effect information with corresponding energy-masking and information-masking labels, and supervised learning ends when the model's output converges. Accurately separating the two components through this model reduces the number of evaluation variables when the influence of different factors on the information or energy masking effect is evaluated further.
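The patent's separation step uses a trained neural network whose details are not disclosed. As a rough stand-in only (explicitly not the patent's method), the sketch below uses a conventional psychoacoustic approximation: the effect of a spectrally matched noise masker is treated as purely energetic, and any excess effect of a speech masker as informational.

```python
def separate_masking(noise_masker_effect_db, speech_masker_effect_db):
    """Crude separation of a masking effect into energetic and informational
    components. Assumes the noise masker produces only energetic masking and
    attributes the speech masker's excess threshold rise to information masking.
    (The patent instead trains a neural 'masking separation model'.)"""
    energy_masking = noise_masker_effect_db
    information_masking = max(0.0, speech_masker_effect_db - noise_masker_effect_db)
    return energy_masking, information_masking
```

For example, a 25 dB rise under noise and a 32 dB rise under competing speech would split into 25 dB energetic and 7 dB informational masking under this approximation.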
S500: inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result;
Specifically, the first energy-masking speech perception capability evaluation result is a scoring result obtained by feeding the energy masking effect into the energy-masking speech perception capability evaluation model for intelligent analysis. That model is built on a neural network and has its characteristics: an artificial neural network is an abstract mathematical model, proposed and developed on the basis of modern neuroscience, that reflects the structure and function of the human brain. It is an operational model formed by a large number of interconnected nodes (also called neurons); each node represents a specific output function called an activation function, and each connection between two nodes carries a weighted value for the signal passing through it, called a weight, which is equivalent to the network's memory. The network's output is an expression of a logical strategy determined by the connection pattern. An energy-masking speech perception capability evaluation model built on such a network can output accurate first energy-masking evaluation results, giving strong analysis and computation capability and an accurate, efficient technical effect.
S600: inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result;
Specifically, the first information-masking speech perception capability evaluation result is a scoring result obtained by feeding the information masking effect into the information-masking speech perception capability evaluation model for intelligent analysis. This model is a neural network of the same type as the energy-masking speech perception capability evaluation model, and likewise can output accurate first information-masking evaluation results, giving strong analysis and computation capability and an accurate, efficient technical effect.
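The architecture of the two evaluation models is not disclosed in the patent; the following is only a generic illustration of the forward pass described above (weighted connections, an activation function per node, a scalar score), with placeholder weights standing in for the result of the supervised training.

```python
import math

def mlp_score(features, w1, b1, w2, b2):
    """Forward pass of a one-hidden-layer network: features -> hidden (tanh)
    -> scalar squashed to a 0-100 score. Weights are placeholders; a real
    model would learn them from labelled masking-effect data."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 100.0 / (1.0 + math.exp(-z))  # logistic squash to the score range
```

With zero inputs and symmetric placeholder weights the score sits at the midpoint of the range, and it rises monotonically as the (hypothetical) masking-effect features grow.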
S700: obtaining a first grade of the first energy masking speech perception capability assessment result;
S800: obtaining a second grade of the first information masking speech perception capability evaluation result;
Specifically, the first grade refers to the grade of the degree to which the energy masking effect of the first-type and second-type masking sounds masks the target sound, obtained from the first energy-masking speech perception capability evaluation result; the second grade refers to the corresponding grade for the information masking effect, obtained from the first information-masking speech perception capability evaluation result. Through the first and second grades, the energy masking effect and the information masking effect of the two types of masking sound can each be evaluated quantitatively.
S900: obtaining a first weight ratio according to the ratio of the first grade to the second grade;
S1000: obtaining a speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result.
Specifically, the first weight ratio is the ratio of the first grade to the second grade, preferably characterized as a percentage. It reveals the relative contribution of the energy masking effect and the information masking effect in the context of the first-type or second-type masking sound, which helps identify the masking factors that most affect the cochlear implant and thus further improve its performance. Further, dividing the masking effect into the energy masking effect and the information masking effect, evaluating each to obtain the first energy-masking and first information-masking speech perception capability evaluation results, and computing their relative influence yields the proportion of energy masking and information masking in different contexts and, finally, the speech perception capability evaluation result of the first child; speech perception evaluation of children using cochlear implants thereby gives a definite direction for further research.
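The grade ratio of S900 and a possible combination rule for S1000 can be sketched as follows. The percentage form of the ratio follows the text; the grade-weighted average in the second function is an assumption, since the patent does not spell out the exact combination formula.

```python
def first_weight_ratio(first_grade, second_grade):
    """S900: ratio of the first grade (energy masking) to the second grade
    (information masking), expressed as a percentage as the text prefers."""
    return 100.0 * first_grade / second_grade

def combined_evaluation(first_grade, second_grade, energy_score, info_score):
    """S1000 (assumed form): combine the two evaluation results by weighting
    each score with its grade. The patent leaves the combination rule open."""
    total = first_grade + second_grade
    return (first_grade * energy_score + second_grade * info_score) / total
```

For equal grades the combined result is simply the mean of the two scores; unequal grades tilt the result toward the dominant masking component.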
Further, based on the obtaining of the first masking effect of the first type of masking sound, step S300 further includes:
S310: obtaining speech intelligibility of the first type of masking sound;
S320: obtaining target sound keyword information;
S330: obtaining masking similarity according to the first type masking sound and the target sound keyword information;
S340: obtaining the first masking effect according to the speech intelligibility and the masking similarity.
In particular, the speech intelligibility of the first type of masking sound refers to how clear, i.e., how intelligible, its speech content is; the higher the intelligibility, the stronger the first masking effect. The target sound keyword information refers to the keywords needed to understand the target sound: for example, if a closed-set phrase is "Zhang San eats rice at lunch", then the name, "eats", and "rice" can serve as target sound keyword information. The masking similarity is determined by comparing the first type of masking sound with the target keyword information and counting how many of the target keywords the masking sound repeats; the larger the count, the higher the masking similarity and the stronger the first masking effect. Characterizing the first masking effect through speech intelligibility and masking similarity facilitates qualitative and quantitative treatment of the masking effect. The second masking effect is determined in the same manner.
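The keyword-overlap measure of masking similarity can be sketched as below. The function name and the normalization by the keyword count are illustrative; the text only requires that more repeated target keywords mean higher similarity.

```python
def masking_similarity(masker_words, target_keywords):
    """Fraction of the target-sound keywords that also occur in the masker:
    more repeated keywords -> higher masking similarity -> stronger masking."""
    if not target_keywords:
        return 0.0
    masker_set = set(masker_words)
    hits = sum(1 for kw in target_keywords if kw in masker_set)
    return hits / len(target_keywords)
```

For instance, a masker containing "eats" and "rice" repeats two of the three keywords of the example phrase and so scores 2/3.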
Further, the method further includes step S1100:
S1110: obtaining first position information of a target sound;
S1120: obtaining second position information of the masking sound;
S1130: obtaining position difference information of the first position information and the second position information;
S1140: constructing a regression model of the spatial position and the information masking effect according to incident angle information, incident height information and incident distance information;
S1150: obtaining a first spatial position influence parameter according to the position difference information and the regression model;
S1160: correcting the first information masking speech perception capability evaluation result according to the first spatial position influence parameter to obtain a second information masking speech perception capability evaluation result.
Specifically, the first position information of the target sound refers to the sound emission position of the target sound, and the second position information of the masking sound refers to the sound emission position of the masking sound; the position difference information is the distance between these two emission positions. An optional way of determining the incident angle, incident height and incident distance is as follows: capture the horizontal and vertical propagation times of the target sound and the masking sound with a sound sensor, determine the horizontal and vertical incident distances from these propagation times, obtain the incident distance, i.e. the hypotenuse, by the Pythagorean theorem, and then determine the incident angle and the incident height. Further, a regression model of spatial position versus information masking effect is established according to the influence of the incident angle, incident height and incident distance on the information masking effect; this influence relationship is determined by the specific application and is not limited herein. Further, combining the influence of the position difference information on the information masking effect with the regression model yields the correlation between spatial position and information masking effect, i.e. the first spatial position influence parameter. Generally, with other variables fixed, the smaller the position difference, the smaller the first spatial position influence parameter; the specific values are not limited herein.
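The propagation-time geometry described above can be sketched as follows; the speed-of-sound constant and the function signature are illustrative, with the hypotenuse obtained via the Pythagorean theorem as stated:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at ~20 °C (assumption)


def incident_geometry(t_horizontal: float, t_vertical: float):
    """Derive incident distance, incident height and incident angle from
    the horizontal and vertical propagation times captured by a sound
    sensor (sketch of the optional determination mode above)."""
    horizontal = SPEED_OF_SOUND * t_horizontal    # horizontal leg
    height = SPEED_OF_SOUND * t_vertical          # vertical leg = incident height
    distance = math.hypot(horizontal, height)     # hypotenuse = incident distance
    angle = math.degrees(math.atan2(height, horizontal))  # incident angle
    return distance, height, angle
```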
The first information masking speech perception capability evaluation result is then corrected by the first spatial position influence parameter, preferably such that the larger the first spatial position influence parameter, the higher the grade of the first information masking speech perception capability evaluation result; the corrected result is the second information masking speech perception capability evaluation result. Adding the influence of spatial position on the information masking effect makes the second information masking speech perception capability evaluation result more comprehensive in information and therefore more accurate.
Further, based on the obtaining of the first position information of the target sound and the second position information of the masking sound, step S1140 further includes:
S1141: obtaining first incident angle information, first incident height information and first incident distance information of the target sound;
S1142: obtaining the first position information according to the first incident angle information, the first incident height information and the first incident distance information;
S1143: obtaining second incident angle information, second incident height information and second incident distance information of the masking sound;
S1144: obtaining the second position information according to the second incident angle information, the second incident height information and the second incident distance information.
Specifically, from the first incident angle information, first incident height information and first incident distance information of the target sound, the horizontal distance between the target sound and the child using the cochlear implant can be determined by treating the first incident angle as the acute angle, the first incident height as one leg and the first incident distance as the hypotenuse of a right triangle; the spatial position of the target sound, i.e. the first position information, is thereby obtained. The second position information of the masking sound is preferably determined in the same manner. Quantifying the positions of the target sound and the masking sound through the first position information and the second position information allows the information masking effect to be evaluated qualitatively and quantitatively.
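Recovering a position from the incidence parameters can be sketched as follows, treating the incident angle as the acute angle and the incident distance as the hypotenuse, as described above; the planar (horizontal, height) coordinate convention is an assumption:

```python
import math


def position_from_incidence(angle_deg: float, distance: float):
    """Resolve the incident distance (hypotenuse) into a horizontal
    distance and an incident height using the incident angle (acute
    angle of the right triangle). Returns (horizontal, height)."""
    height = distance * math.sin(math.radians(angle_deg))      # opposite leg
    horizontal = distance * math.cos(math.radians(angle_deg))  # adjacent leg
    return horizontal, height
```

When the incident height is also measured directly, it can serve as a cross-check on the value recovered here.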
Further, the method further includes S1200:
S1210: obtaining first time information of a target sound reaching both ears;
S1220: obtaining second time information of the masking sound reaching both ears;
S1230: obtaining time difference information of the first time information and the second time information;
S1240: obtaining first intensity information of the target sound reaching both ears;
S1250: obtaining second intensity information of the masking sound reaching both ears;
S1260: obtaining intensity difference information of the first intensity information and the second intensity information;
S1270: obtaining a first intensity difference influence parameter according to the intensity difference information;
S1280: correcting the first information masking speech perception capability evaluation result according to the first intensity difference influence parameter to obtain a third information masking speech perception capability evaluation result.
Specifically, the first time information is the recorded propagation time of the target sound to both ears, preferably captured with a sound sensor; the second time information is the propagation time of the masking sound to both ears, determined in the same manner. The first intensity information is the sound intensity of the target sound on arrival at both ears, and the second intensity information is that of the masking sound, preferably expressed in decibels and measured with a sound sensor. The difference between the first intensity information and the second intensity information is calculated to obtain the intensity difference information. The larger the intensity difference, the more pronounced its influence on the masking effect, and the first intensity difference influence parameter is obtained from this relationship. The first information masking speech perception capability evaluation result is then corrected according to the first intensity difference influence parameter to obtain the third information masking speech perception capability evaluation result. Supplementing the evaluation with these influence factors achieves the technical effect of a more accurate evaluation result.
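The intensity-difference correction can be sketched as follows; the linear mapping, its scale factor, and the additive correction rule are assumptions, the embodiment only stating that a larger intensity difference influences the masking effect more strongly:

```python
def intensity_difference_parameter(target_db: float,
                                   masker_db: float,
                                   scale: float = 0.1) -> float:
    """Map the target/masker level difference (in dB) to an influence
    parameter; the linear form and scale factor are illustrative."""
    return scale * abs(target_db - masker_db)


def corrected_result(first_result: float, influence: float) -> float:
    # Assumed correction rule: shift the first information masking speech
    # perception evaluation result by the influence parameter.
    return first_result + influence
```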
Further, the method further includes step S1300:
S1310: obtaining quantity information of the masking sounds;
S1320: performing vector conversion on the quantity information of the masking sounds to obtain a vector modulus corresponding to the quantity information;
S1330: obtaining a first quantity influence parameter according to the vector modulus;
S1340: correcting the first information masking speech perception capability evaluation result according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result.
Specifically, the quantity information of the masking sounds refers to the number of masking sound sources, optionally determined by counting sounds in different decibel intervals. The vector modulus corresponding to the quantity information means that the quantity information of the masking sounds is represented by a vector, the quantity corresponding to the vector modulus and the direction of each masking sound corresponding to the direction of the vector. The first quantity influence parameter is established according to the influence of the number of masking sounds on the information masking effect; generally, the larger the vector modulus, the more pronounced the information masking effect. The first information masking speech perception capability evaluation result is then corrected according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result. Adding the influence of the number of masking sounds to the evaluation through the first quantity influence parameter makes the fourth information masking speech perception capability evaluation result more accurate and comprehensive.
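One possible reading of the vector conversion is sketched below: each masking sound contributes a unit-weight vector in its direction, and the modulus of the resultant vector is taken. This encoding is an assumption, since the embodiment does not specify how quantity and direction are combined:

```python
import math


def masking_vector_modulus(sources: list) -> float:
    """Each masking sound is given as (weight, direction_in_degrees);
    sum the vectors and return the modulus of the resultant. More
    co-directional masking sounds give a larger modulus (sketch)."""
    x = sum(w * math.cos(math.radians(d)) for w, d in sources)
    y = sum(w * math.sin(math.radians(d)) for w, d in sources)
    return math.hypot(x, y)
```

Note that under this encoding, maskers from opposing directions partially cancel, which is one interpretation of direction entering the vector modulus.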
Further, based on the inputting of the energy masking effect into the energy masking speech perception capability evaluation model to obtain the first energy masking speech perception capability evaluation result, steps S500 and S600 further include:
S510: training a neural network model using the energy masking effect in historical data as a training data set until the model reaches a convergence state, so as to construct the energy masking speech perception capability evaluation model;
S520: inputting the energy masking effect as input information into the energy masking speech perception capability evaluation model;
S530: obtaining output information of the energy masking speech perception capability evaluation model, wherein the output information includes the first energy masking speech perception capability evaluation result.
Specifically, the energy masking speech perception capability evaluation model is a neural network model in machine learning, which reflects many basic characteristics of human brain function and is a highly complex nonlinear dynamical learning system. It can continuously learn from training data; here, the neural network model is trained with the energy masking effect in historical data as the training data set. The energy masking speech perception capability evaluation model continuously corrects itself, and the supervised learning process ends when the output of the evaluation model reaches a preset accuracy rate, i.e. the convergence state. Training the model on this data enables it to process input information more accurately, so that the output first energy masking speech perception capability evaluation result is more accurate, thereby achieving the technical effects of accurately obtaining data information and making the evaluation more intelligent. The information masking speech perception capability evaluation model is preferably constructed on the same principle.
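The train-until-convergence loop can be sketched as follows. A simple linear model fitted by gradient descent stands in for the neural network, since the embodiment does not specify an architecture; the convergence test on the change in loss, the learning rate, and all names are assumptions:

```python
def train_until_convergence(data, lr=0.05, tol=1e-6, max_epochs=10000):
    """Fit y ≈ w*x + b on historical (masking effect, evaluation result)
    pairs, stopping when the mean squared error stops changing by more
    than `tol` -- the 'convergence state' in the training step above.
    A real implementation would train a neural network; this linear
    model only illustrates the loop (assumption)."""
    w, b, prev = 0.0, 0.0, float("inf")
    n = len(data)
    for _ in range(max_epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w, b = w - lr * grad_w, b - lr * grad_b
        loss = sum((w * x + b - y) ** 2 for x, y in data) / n
        if abs(prev - loss) < tol:  # convergence state reached
            break
        prev = loss
    return w, b


def evaluate(model, masking_effect):
    """Apply the trained model to a new masking effect value."""
    w, b = model
    return w * masking_effect + b
```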
In summary, the method and the system for testing the speech perception ability of the child using the cochlear implant provided by the embodiment of the application have the following technical effects:
1. A first type masking sound and a second type masking sound are obtained, the two being different; a first masking effect of the first type masking sound and a second masking effect of the second type masking sound are obtained; the first masking effect and the second masking effect are input into a masking separation model to obtain an energy masking effect and an information masking effect; the energy masking effect is input into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result, and the information masking effect is input into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; a first grade of the former and a second grade of the latter are obtained, and a first weight ratio is obtained from the ratio of the first grade to the second grade; the speech perception capability evaluation result of the first child is then obtained according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result. By obtaining multiple masking sounds, a multi-person competitive context is simulated; the masking effects of these sounds on the child using the cochlear implant are obtained separately and divided by the masking separation model into an energy masking effect and an information masking effect, each of which is evaluated to obtain a grade, and the relative contribution of the two effects is then assessed by the weight ratio of the grades. The evaluation result can thus be expressed by the evaluation grades and the weight ratio of the energy masking effect and the information masking effect, achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect for children using cochlear implants in a multi-person competitive context.
2. The influence of the first spatial position influence parameter, the first intensity difference influence parameter and the first quantity influence parameter on the information masking effect is added into the first information masking speech perception capability evaluation result, so that the masking speech perception capability evaluation result is more accurate and comprehensive.
Example two
Based on the same inventive concept as the speech perception capability test method for children using cochlear implants in the foregoing embodiments, as shown in fig. 2, embodiments of the present application provide a speech perception capability test system for children using cochlear implants, wherein the system includes:
a first obtaining unit 11, configured to obtain a first type masking sound and a second type masking sound, where the first type masking sound and the second type masking sound are different;
a second obtaining unit 12, the second obtaining unit 12 being configured to obtain a first masking effect of the first type masking sound;
a third obtaining unit 13, the third obtaining unit 13 being configured to obtain a second masking effect of the second type of masking sound;
a fourth obtaining unit 14, where the fourth obtaining unit 14 is configured to input the first masking effect and the second masking effect into a masking separation model, and obtain an energy masking effect and an information masking effect;
a fifth obtaining unit 15, where the fifth obtaining unit 15 is configured to input the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result;
a sixth obtaining unit 16, where the sixth obtaining unit 16 is configured to input the information masking effect into an information masking speech perception capability evaluation model, and obtain a first information masking speech perception capability evaluation result;
a seventh obtaining unit 17, where the seventh obtaining unit 17 is configured to obtain a first grade of the first energy masking speech perception capability evaluation result;
an eighth obtaining unit 18, where the eighth obtaining unit 18 is configured to obtain a second grade of the first information masking speech perception capability evaluation result;
a ninth obtaining unit 19, where the ninth obtaining unit 19 is configured to obtain a first weight ratio according to a ratio of the first grade to the second grade;
a tenth obtaining unit 20, where the tenth obtaining unit 20 is configured to obtain the speech perception capability evaluation result of the first child according to the first weight ratio, the first energy masking speech perception capability evaluation result, and the first information masking speech perception capability evaluation result.
Further, the system further comprises:
an eleventh obtaining unit, configured to obtain speech intelligibility of the first type of masking sound;
a twelfth obtaining unit configured to obtain target acoustic keyword information;
a thirteenth obtaining unit, configured to obtain a masking similarity according to the first type masking sound and the target sound keyword information;
a fourteenth obtaining unit for obtaining the first masking effect according to the speech intelligibility and the masking similarity.
Further, the system further comprises:
a fifteenth obtaining unit configured to obtain first position information of a target sound;
a sixteenth obtaining unit that obtains second position information of the masking sound;
a seventeenth obtaining unit configured to obtain position difference information of the first position information and the second position information;
an eighteenth obtaining unit, configured to construct a regression model of the spatial position and the information masking effect according to the incident angle information, the incident height information, and the incident distance information;
a nineteenth obtaining unit, configured to obtain a first spatial position influence parameter according to the position difference information and the regression model;
a twentieth obtaining unit, configured to correct the first information masking speech perception capability evaluation result according to the first spatial position influence parameter to obtain a second information masking speech perception capability evaluation result.
Further, the system further comprises:
a twenty-first obtaining unit configured to obtain first incident angle information, first incident height information, and first incident distance information of the target sound;
a twenty-second obtaining unit configured to obtain the first position information according to the first incident angle information, the first incident height information, and the first incident distance information;
a twenty-third obtaining unit configured to obtain second incident angle information, second incident height information, and second incident distance information of the masking sound;
a twenty-fourth obtaining unit, configured to obtain the second position information according to the second incident angle information, the second incident height information, and the second incident distance information.
Further, the system further comprises:
a twenty-fifth obtaining unit, configured to obtain first time information when a target sound reaches both ears;
a twenty-sixth obtaining unit configured to obtain second time information when the masking sound reaches both ears;
a twenty-seventh obtaining unit configured to obtain time difference information of the first time information and the second time information;
a twenty-eighth obtaining unit configured to obtain first intensity information of a target sound reaching both ears;
a twenty-ninth obtaining unit configured to obtain second intensity information of the masking sound reaching both ears;
a thirtieth obtaining unit configured to obtain intensity difference information of the first intensity information and the second intensity information;
a thirty-first obtaining unit, configured to obtain a first intensity difference influence parameter according to the intensity difference information;
a thirty-second obtaining unit, configured to correct the first information masking speech perception capability evaluation result according to the first intensity difference influence parameter, and obtain a third information masking speech perception capability evaluation result.
Further, the system further comprises:
a thirty-third obtaining unit, configured to obtain quantity information of the masking sounds;
a thirty-fourth obtaining unit, configured to perform vector conversion on the quantity information of the masking sounds to obtain a vector modulus corresponding to the quantity information;
a thirty-fifth obtaining unit, configured to obtain a first quantity influence parameter according to the vector modulus;
a first correcting unit, configured to correct the first information masking speech perception capability evaluation result according to the first quantity influence parameter to obtain a fourth information masking speech perception capability evaluation result.
Further, the system further comprises:
a first construction unit, configured to train a neural network model using the energy masking effect in historical data as a training data set until the model reaches a convergence state, so as to construct the energy masking speech perception capability evaluation model;
a first input unit for inputting the energy masking effect as input information into the energy masking speech perception capability evaluation model;
a first output unit, configured to obtain output information of the energy masking speech perception capability assessment model, where the output information includes a result of the first energy masking speech perception capability assessment.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to figure 3.
Based on the same inventive concept as the speech perception capability test method for the child using the cochlear implant in the foregoing embodiment, an embodiment of the present application further provides a speech perception capability test system for the child using the cochlear implant, comprising: a processor coupled to a memory, the memory being configured to store a program that, when executed by the processor, causes the system to perform the method of any one of the first aspects.
The electronic device 300 includes: processor 302, communication interface 303, memory 301. Optionally, the electronic device 300 may also include a bus architecture 304. Wherein, the communication interface 303, the processor 302 and the memory 301 may be connected to each other through a bus architecture 304; the bus architecture 304 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus architecture 304 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
Processor 302 may be a CPU, microprocessor, ASIC, or one or more integrated circuits for controlling the execution of programs in accordance with the teachings of the present application.
The communication interface 303 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as an ethernet, a Radio Access Network (RAN), a Wireless Local Area Network (WLAN), a wired access network, and the like.
The memory 301 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor through the bus architecture 304. The memory may also be integral to the processor.
The memory 301 is used for storing computer-executable instructions for executing the present application, and is controlled by the processor 302 to execute. The processor 302 is configured to execute the computer-executable instructions stored in the memory 301, so as to implement a speech perception capability test method for a child using a cochlear implant according to the above-described embodiment of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program codes, which are not specifically limited in the embodiments of the present application.
The embodiment of the application provides a speech perception capability test method for a child using an artificial cochlea, wherein the method comprises: obtaining a first type masking sound and a second type masking sound, the two being different; obtaining a first masking effect of the first type masking sound and a second masking effect of the second type masking sound; inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect; inputting the energy masking effect into an energy masking speech perception capability evaluation model to obtain a first energy masking speech perception capability evaluation result; inputting the information masking effect into an information masking speech perception capability evaluation model to obtain a first information masking speech perception capability evaluation result; obtaining a first grade of the first energy masking speech perception capability evaluation result and a second grade of the first information masking speech perception capability evaluation result; obtaining a first weight ratio according to the ratio of the first grade to the second grade; and obtaining a speech perception capability evaluation result of a first child according to the first weight ratio, the first energy masking speech perception capability evaluation result and the first information masking speech perception capability evaluation result. By obtaining multiple masking sounds, a multi-person competitive context is simulated; the masking effects of these sounds on the child using the cochlear implant are obtained separately and divided by the masking separation model into an energy masking effect and an information masking effect, each of which is evaluated to obtain a grade, and the relative contribution of the two effects is then assessed by the weight ratio of the grades. The evaluation result can thus be expressed by the evaluation grades and the weight ratio of the energy masking effect and the information masking effect, achieving the technical effect of qualitatively and quantitatively evaluating the auditory masking effect for children using cochlear implants in a multi-person competitive context.
Those of ordinary skill in the art will understand that: the various numbers of the first, second, etc. mentioned in this application are only used for the convenience of description and are not used to limit the scope of the embodiments of this application, nor to indicate the order of precedence. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one" means one or more. At least two means two or more. "at least one," "any," or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one (one ) of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave, etc.) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, integrating one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The various illustrative logical units and circuits described in this application may be implemented or operated upon by design of a general purpose processor, a digital signal processor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other similar configuration.
The steps of a method or algorithm described in the embodiments herein may be embodied directly in hardware, in a software unit executed by a processor, or in a combination of the two. The software unit may be stored in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. For example, a storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC, which may be disposed in a terminal. In the alternative, the processor and the storage medium may reside in different components within the terminal. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the application as defined by the appended claims, and the application is intended to cover any and all modifications, variations, combinations, or equivalents that fall within its scope. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from its scope; thus, if such modifications and variations fall within the scope of the claims of the present application and their equivalents, the present application is intended to include them.

Claims (9)

1. A speech perception ability testing method for children using cochlear implants, wherein the method comprises:
obtaining a first type of masking sound and a second type of masking sound, wherein the first type of masking sound and the second type of masking sound are different;
obtaining a first masking effect of the first type of masking sound;
obtaining a second masking effect of the second type of masking sound;
inputting the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect;
inputting the energy masking effect into an energy-masking speech perception ability evaluation model to obtain a first energy-masking speech perception ability evaluation result;
inputting the information masking effect into an information-masking speech perception ability evaluation model to obtain a first information-masking speech perception ability evaluation result;
obtaining a first level of the first energy-masking speech perception ability evaluation result;
obtaining a second level of the first information-masking speech perception ability evaluation result;
obtaining a first weight ratio according to the ratio of the first level to the second level;
obtaining a speech perception ability evaluation result of a first child according to the first weight ratio, the first energy-masking speech perception ability evaluation result, and the first information-masking speech perception ability evaluation result.

2. The method of claim 1, wherein the obtaining a first masking effect of the first type of masking sound comprises:
obtaining the speech intelligibility of the first type of masking sound;
obtaining target-sound keyword information;
obtaining a masking similarity according to the first type of masking sound and the target-sound keyword information;
obtaining the first masking effect according to the speech intelligibility and the masking similarity.

3. The method of claim 1, wherein the method comprises:
obtaining first position information of a target sound;
obtaining second position information of a masking sound;
obtaining position difference information between the first position information and the second position information;
constructing a regression model of spatial position against the information masking effect according to incident angle information, incident height information, and incident distance information;
obtaining a first spatial position influence parameter according to the position difference information and the regression model;
correcting the first information-masking speech perception ability evaluation result according to the first spatial position influence parameter to obtain a second information-masking speech perception ability evaluation result.

4. The method of claim 3, wherein the obtaining first position information of the target sound and second position information of the masking sound comprises:
obtaining first incident angle information, first incident height information, and first incident distance information of the target sound;
obtaining the first position information according to the first incident angle information, the first incident height information, and the first incident distance information;
obtaining second incident angle information, second incident height information, and second incident distance information of the masking sound;
obtaining the second position information according to the second incident angle information, the second incident height information, and the second incident distance information.

5. The method of claim 1, wherein the method comprises:
obtaining first time information of the target sound arriving at both ears;
obtaining second time information of the masking sound arriving at both ears;
obtaining time difference information between the first time information and the second time information;
obtaining first intensity information of the target sound arriving at both ears;
obtaining second intensity information of the masking sound arriving at both ears;
obtaining intensity difference information between the first intensity information and the second intensity information;
obtaining a first intensity difference influence parameter according to the intensity difference information;
correcting the first information-masking speech perception ability evaluation result according to the first intensity difference influence parameter to obtain a third information-masking speech perception ability evaluation result.

6. The method of claim 1, wherein the method comprises:
obtaining quantity information of the masking sounds;
performing vector conversion on the quantity information of the masking sounds to obtain a vector norm corresponding to the quantity information;
obtaining a first quantity influence parameter according to the vector norm;
correcting the first information-masking speech perception ability evaluation result according to the first quantity influence parameter to obtain a fourth information-masking speech perception ability evaluation result.

7. The method of claim 1, wherein the inputting the energy masking effect into an energy-masking speech perception ability evaluation model to obtain a first energy-masking speech perception ability evaluation result comprises:
training a neural network model with energy masking effects from historical data as a training data set until the model reaches a converged state, thereby constructing the energy-masking speech perception ability evaluation model;
inputting the energy masking effect into the energy-masking speech perception ability evaluation model as input information;
obtaining output information of the energy-masking speech perception ability evaluation model, the output information comprising the first energy-masking speech perception ability evaluation result.

8. A speech perception ability testing system for children using cochlear implants, wherein the system comprises:
a first obtaining unit, configured to obtain a first type of masking sound and a second type of masking sound, wherein the first type of masking sound and the second type of masking sound are different;
a second obtaining unit, configured to obtain a first masking effect of the first type of masking sound;
a third obtaining unit, configured to obtain a second masking effect of the second type of masking sound;
a fourth obtaining unit, configured to input the first masking effect and the second masking effect into a masking separation model to obtain an energy masking effect and an information masking effect;
a fifth obtaining unit, configured to input the energy masking effect into an energy-masking speech perception ability evaluation model to obtain a first energy-masking speech perception ability evaluation result;
a sixth obtaining unit, configured to input the information masking effect into an information-masking speech perception ability evaluation model to obtain a first information-masking speech perception ability evaluation result;
a seventh obtaining unit, configured to obtain a first level of the first energy-masking speech perception ability evaluation result;
an eighth obtaining unit, configured to obtain a second level of the first information-masking speech perception ability evaluation result;
a ninth obtaining unit, configured to obtain a first weight ratio according to the ratio of the first level to the second level;
a tenth obtaining unit, configured to obtain a speech perception ability evaluation result of a first child according to the first weight ratio, the first energy-masking speech perception ability evaluation result, and the first information-masking speech perception ability evaluation result.

9. A smart pharmacy service evaluation system, comprising: a processor coupled to a memory, the memory being configured to store a program which, when executed by the processor, causes the system to perform the method of any one of claims 1 to 7.
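The core of claim 1 is a two-branch fusion: the energy-masking and information-masking evaluation results are combined using a weight ratio derived from the ratio of their two grades. The following minimal sketch illustrates that fusion step only; the function name, the positive numeric grades, and the normalization of the weight ratio into weights summing to one are assumptions of this example and are not fixed by the claim.

```python
def fuse_speech_perception_scores(energy_result: float,
                                  info_result: float,
                                  energy_level: int,
                                  info_level: int) -> float:
    """Combine the energy-masking and information-masking evaluation
    results into a single speech perception score.

    The first weight ratio is the ratio of the two grades, per claim 1.
    Normalizing it so the two weights sum to 1 is an assumption of this
    sketch; the claim does not specify a normalization.
    """
    if energy_level <= 0 or info_level <= 0:
        raise ValueError("grades must be positive")
    ratio = energy_level / info_level          # first weight ratio
    w_energy = ratio / (1.0 + ratio)           # normalize into [0, 1]
    w_info = 1.0 - w_energy
    return w_energy * energy_result + w_info * info_result
```

With equal grades the two branches contribute equally; a higher energy-masking grade shifts the final score toward the energy-masking result.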
CN202110685445.8A 2021-06-21 2021-06-21 A method and system for testing speech perception ability of children using cochlear implants Active CN113476041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110685445.8A CN113476041B (en) 2021-06-21 2021-06-21 A method and system for testing speech perception ability of children using cochlear implants

Publications (2)

Publication Number Publication Date
CN113476041A true CN113476041A (en) 2021-10-08
CN113476041B CN113476041B (en) 2023-09-19

Family

ID=77935565

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110685445.8A Active CN113476041B (en) 2021-06-21 2021-06-21 A method and system for testing speech perception ability of children using cochlear implants

Country Status (1)

Country Link
CN (1) CN113476041B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040064066A1 (en) * 2000-05-19 2004-04-01 John Michael S. System and method for objective evaluation of hearing using auditory steady-state responses
CN101783998A (en) * 2008-12-22 2010-07-21 奥迪康有限公司 A method of operating a hearing instrument based on an estimation of present cognitive load of a user and a hearing aid system
CN103325383A (en) * 2012-03-23 2013-09-25 杜比实验室特许公司 Audio processing method and audio processing device
US20150104022A1 (en) * 2012-03-23 2015-04-16 Dolby Laboratories Licensing Corporation Audio Processing Method and Audio Processing Apparatus
JP2015057621A (en) * 2013-09-14 2015-03-26 ヤマハ株式会社 Evaluation device and evaluation method for voice masking
US10043527B1 (en) * 2015-07-17 2018-08-07 Digimarc Corporation Human auditory system modeling with masking energy adaptation
US20180160984A1 (en) * 2016-12-13 2018-06-14 Stefan Jozef Mauger Speech production and the management/prediction of hearing loss
CN107346664A (en) * 2017-06-22 2017-11-14 河海大学常州校区 A kind of ears speech separating method based on critical band
CN110338811A (en) * 2018-04-08 2019-10-18 苏州至听听力科技有限公司 For the hearing test apparatus and computer-readable medium under complicated acoustical signal background
CN112584275A (en) * 2019-09-29 2021-03-30 深圳Tcl新技术有限公司 Sound field expansion method, computer equipment and computer readable storage medium
CN111239686A (en) * 2020-02-18 2020-06-05 中国科学院声学研究所 Dual-channel sound source positioning method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TAO, DUO-DUO ET AL.: "Effects of age and duration of deafness on Mandarin speech understanding in competing speech by normal-hearing and cochlear implant children", J. Acoust. Soc. Am., vol. 144, no. 2, XP012230980, DOI: 10.1121/1.5051051 *
YE FEI, LIU JISHENG ET AL.: "Study on the contribution of contralateral hearing aids to speech perception of cochlear implant users", Chinese Journal of Otology (《中华耳科学杂志》), vol. 18, no. 1

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant