CN117012212A - Artificial cochlea sound signal coding method, processor, medium and artificial cochlea
- Publication number
- CN117012212A (application number CN202310761240.2A)
- Authority
- CN
- China
- Legal status
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N1/00—Electrotherapy; Circuits therefor
- A61N1/18—Applying electric currents by contact electrodes
- A61N1/32—Applying electric currents by contact electrodes alternating or intermittent currents
- A61N1/36—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
- A61N1/36036—Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the outer, middle or inner ear
- A61N1/36038—Cochlear stimulation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/26—Pre-filtering or post-filtering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The application provides an artificial cochlea sound signal coding method, a processor, a medium and an artificial cochlea. Different coding methods are applied respectively to the high-frequency and low-frequency characteristic parts of the acquired sound signal, so that the electrode channels of the artificial cochlea implant generate stimulation currents that cover both the low-frequency and the high-frequency domains. This addresses the low clarity and intelligibility of existing artificial cochlea technology and the susceptibility of the signal to electrical interference, and improves the performance of the artificial cochlea sound signal coding algorithm on low-power, low-compute signal processing devices.
Description
Technical Field
The application relates to the field of artificial cochlea signal processing, in particular to an artificial cochlea sound signal coding method, a processor, a medium and an artificial cochlea.
Background
Cochlear implant technology is currently the only effective means accepted worldwide for restoring hearing in patients with bilateral severe-to-profound sensorineural hearing loss. In most mainstream cochlear implant systems, external sound is first captured by a microphone and converted into an electrical signal, which is then pre-emphasized, compressed, denoised and encoded before being transmitted into the body through a transmitting coil worn behind the ear. After the receiving coil of the implant senses the signal, a decoding chip decodes the received electrical signal so that the stimulation electrodes of the implant generate current stimuli, which finally assist hearing via the patient's auditory nerve.
During the operation of an artificial cochlea, the sound coding algorithm is an important factor in determining whether artificial hearing reconstruction is satisfactory. In clinical practice, however, hearing reconstruction with a sound coding algorithm is easily limited by the performance and power consumption of the signal processing device, and the encoded signal suffers considerable loss in both time resolution and frequency resolution. The CIS speech coding scheme commonly adopted in the prior art mitigates the inter-channel interference caused by simultaneous electrode stimulation in the traditional Compressed Analog (CA) scheme through a continuous interleaved sampling strategy, and uses a higher stimulation rate to obtain better time-domain information. The ACE speech coding scheme adopts a waveform coding approach that improves on the insufficient frequency resolution of the CIS scheme by selecting, for stimulation, the several channels with the largest sub-band energies, but it still has shortcomings.
In the prior art, the clarity and intelligibility of the reconstructed speech signal are low and the time resolution of the signal is easily affected by electrical interference; these problems are especially pronounced on low-power, low-compute signal processing devices. Therefore, to obtain a better hearing outcome, the features of the speech signal must be extracted as effectively as possible and the speech coding strategy improved, so as to provide a better wearing experience for the patient.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, the present application aims to provide an artificial cochlea sound signal coding method, a processor, a medium and an artificial cochlea, which solve the low clarity and intelligibility of existing artificial cochlea technology and the susceptibility of the signal to electrical interference, and which improve the performance of the artificial cochlea sound signal coding algorithm on signal processing devices with low power consumption and low computational power.
To achieve the above and other related objects, a first aspect of the present application provides a method for encoding a cochlear implant sound signal, which is applied to a cochlear implant signal processor; the method comprises the following steps: collecting sound signals; extracting low-frequency characteristics and high-frequency characteristics of the collected sound signals; performing interval mapping operation on the extracted low-frequency characteristics and the extracted high-frequency characteristics based on corresponding preset parameters respectively to obtain a low-frequency mapping interval and a high-frequency mapping interval respectively; and combining and outputting the low-frequency mapping interval and the high-frequency mapping interval so as to enable the electrode channel of the artificial cochlea implant to generate the stimulation current covering the low frequency domain and the high frequency domain.
In some embodiments of the first aspect of the present application, the process of extracting low frequency features from the collected sound signal includes: performing full-wave rectification operation on the collected sound signals; filtering the sound signal subjected to full-wave rectification operation to obtain low-frequency signals of a plurality of channels; and carrying out downsampling operation on the low-frequency signal according to a preset stimulation rate so as to obtain the low-frequency characteristic of the sound signal.
In some embodiments of the first aspect of the present application, the sound signal subjected to the full-wave rectification operation is subjected to a filtering operation using a low-pass filter.
In some embodiments of the first aspect of the present application, the process of extracting high-frequency features of the collected sound signal includes: filtering the collected sound signals through a preset filter bank to obtain a high-frequency complex frequency spectrum; calculating the energy value of the high-frequency complex spectrum to obtain a high-frequency energy spectrum; carrying out channel combination operation according to the high-frequency energy spectrum and outputting characteristic energy spectrums of a plurality of sub-bands; and sequencing the characteristic energy spectrums of the plurality of sub-bands from large to small, reserving characteristic energy spectrum values of a plurality of sub-bands with the front sequencing in the energy spectrums, and setting the characteristic energy spectrum values of the rest sub-bands to be null so as to generate and obtain high-frequency characteristics.
In some embodiments of the first aspect of the present application, the predetermined filter bank includes a WOLA filter bank.
In some embodiments of the first aspect of the present application, the process of performing the interval mapping operation on the extracted low-frequency feature and high-frequency feature respectively includes: performing a mapping-window clipping operation on the low-frequency feature and the high-frequency feature input to the mapping window, using several groups of threshold parameters; each mapping window is defined by a group of threshold parameters comprising a maximum-value parameter and a minimum-value parameter; and mapping the low-frequency feature and the high-frequency feature subjected to the mapping-window clipping operation into a current-level interval within a preset range, to obtain the timing information of the low-frequency feature and the timing information of the high-frequency feature.
In some embodiments of the first aspect of the present application, the interval mapping operation for the high-frequency feature further includes, after the timing information of the high-frequency feature is obtained and before output: performing an up-sampling operation on the timing information of the high-frequency feature mapped into the current-level interval within the preset range, and aligning it with the timing information of the low-frequency feature, so that the timing information of the low-frequency feature and that of the high-frequency feature can be combined and output.
To achieve the above and other related objects, a second aspect of the present application provides an artificial cochlea signal processor for establishing a communication connection with an artificial cochlea implant, the artificial cochlea signal processor comprising: a signal acquisition module, for collecting sound signals; a feature extraction module, for extracting low-frequency features and high-frequency features of the collected sound signals; an interval mapping module, for performing interval mapping operations on the extracted low-frequency features and high-frequency features based on corresponding preset parameters, to obtain a low-frequency mapping interval and a high-frequency mapping interval respectively; and a signal output module, for combining and outputting the low-frequency mapping interval and the high-frequency mapping interval, so that the electrode channels of the artificial cochlea implant generate stimulation currents covering the low-frequency domain and the high-frequency domain.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method.
To achieve the above and other related objects, a fourth aspect of the present application provides an artificial cochlea comprising the artificial cochlea signal processor provided in the second aspect of the present application and an artificial cochlea implant, wherein the artificial cochlea signal processor establishes a communication connection with the artificial cochlea implant.
As described above, the present application relates to the field of artificial cochlea signal processing, and in particular to an artificial cochlea sound signal coding method, a processor, a medium and an artificial cochlea. The application has the following beneficial effects: different coding methods are applied respectively to the high-frequency characteristic part and the low-frequency characteristic part of the acquired sound signal, which improves the clarity and intelligibility of existing artificial cochlea equipment, enhances the anti-interference capability of the artificial cochlea, and improves the performance of the artificial cochlea sound signal coding algorithm on the low-power, low-compute signal processing devices of the prior art.
Drawings
Fig. 1 is a schematic flow chart of an embodiment of an artificial cochlea sound signal encoding method according to the present application.
Fig. 2A is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 2B is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 3A is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 3B is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 4A is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 4B is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 5 is a schematic diagram of the differential mapping in the embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 6 is a schematic diagram of interpolation of high-frequency characteristics in an embodiment of an artificial cochlea sound signal encoding method according to the present application.
Fig. 7 is a schematic structural diagram of an artificial cochlea signal processor according to an embodiment of the present application.
Fig. 8 is a schematic flow chart of an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 9 is a time domain waveform diagram of an input signal in an embodiment of a method for encoding an artificial cochlea sound signal according to the present application.
Fig. 10 is a waveform diagram of a sub-signal processed by a band-pass filter set and output according to an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 11 is an envelope diagram of a sub-signal output by a band-pass filter bank in an embodiment of the method for encoding an artificial cochlea sound signal according to the present application.
Fig. 12 is a diagram of an electrode channel of an output result of a cochlear implant sound signal processor after performing an encoding operation by a cochlear implant sound encoding method in an embodiment of the cochlear implant sound signal encoding method of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the following disclosure, which describes the embodiments of the present application with reference to specific examples. The application may also be implemented or applied through other, different specific embodiments, and the details of this description may be modified or changed in various ways for different viewpoints and applications without departing from the spirit of the application. It should be noted that the following embodiments and the features in the embodiments may be combined with each other provided there is no conflict.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. Spatially relative terms, such as "upper", "lower", "left" and "right", may be used herein to describe the relationship of one element or feature to another element or feature as illustrated in the figures.
In the present application, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," "held," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including" specify the presence of stated features, operations, elements, components, items, categories and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
In order to solve the problems described in the background, the invention provides a method that addresses the low clarity and intelligibility of the existing artificial cochlea technology and the susceptibility of the signal to electrical interference. Meanwhile, in order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described in further detail through the following examples with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Before explaining the present invention in further detail, the terms used in the embodiments of the present invention are explained; the following explanations apply to the terms involved in the embodiments of the present invention:
<1> cochlear implant: an artificial cochlea is an electronic device in which an external speech processor converts sound into an electrical signal with a certain coding form, and an electrode system implanted in the human body directly excites the auditory nerve to restore or reconstruct the patient's auditory function.
<2> full wave rectification: full-wave rectification is a circuit technique for rectifying alternating current. In such a rectifying circuit, during one half cycle the current flows through one rectifying device (e.g. a semiconductor diode) and during the other half cycle through a second rectifying device; the two rectifying devices are connected so that the current flowing through them passes through the load in the same direction.
<3> filtering: filtering is the operation of filtering out specific band frequencies in a signal. Filtering is an important measure for suppressing and preventing interference, and is classified into classical filtering and modern filtering.
<4> low pass filter: a low pass filter is an electronic filtering device that allows signals below a cut-off frequency to pass through and signals above the cut-off frequency to not pass through.
<5> bandpass filter: a band pass filter is a filter that can pass frequency components in a certain frequency range but attenuate frequency components in other ranges to an extremely low level, and is a concept opposite to a band stop filter. The band pass filter may also be produced with a low pass filter and a high pass filter combination.
<6> downsampling: downsampling is a technique of multirate digital signal processing and can be understood as the process of reducing the sampling rate of a signal, typically used to reduce the data transmission rate or the data size. Because downsampling can cause aliasing, a filter is used in the downsampling process to reduce the resulting distortion; the part of a system that performs downsampling is called a decimator (down-converter).
<7> upsampling: upsampling is the process of increasing the sampling rate of a discrete signal, typically by inserting new sample values between the existing ones (interpolation); it is the inverse operation of downsampling.
<8> complex spectrum: a method of representing a signal or noise as a frequency function using a fourier transform or a sequence of complex coefficients of a fourier series.
<9> subband: among the subband coding techniques, a technique of converting an original signal from a time domain to a frequency domain, dividing it into a plurality of subbands, and digitally coding them, respectively. The frequency band with specific characteristics, which is generated by the original signal through the band-pass filter, is called a sub-band.
<10> WOLA (Weighted Overlap and Add) filter bank: a filter bank with a polyphase, weighted-overlap-add structure; it is an efficient implementation of the DFT filter bank. Its advantage is that the data decimation rate is not restricted to an integer multiple of the number of channels, so it offers greater flexibility and can process multi-channel data efficiently and conveniently.
<11> dft (Discrete Fourier Transform): the discrete fourier transform may transform a signal from the time domain to the frequency domain, and both the time domain and the frequency domain are discrete.
<12> fft (Fast Fourier transform): the fast fourier transform is an efficient algorithm for DFT, and can be divided into time extraction and frequency extraction.
The embodiment of the invention provides an artificial cochlea sound signal coding method, an artificial cochlea signal processor, a storage medium storing an executable program for realizing the artificial cochlea sound signal coding method and an artificial cochlea for realizing the artificial cochlea sound signal coding method. With respect to implementation of the cochlear implant sound signal encoding method, exemplary implementation scenarios of the cochlear implant sound signal encoding method will be described.
Fig. 1 is a schematic flow chart of a method for encoding an artificial cochlea sound signal according to an embodiment of the present invention. The method for encoding the artificial cochlea sound signals in the embodiment mainly comprises the following steps:
Step S11: the sound signal is collected.
In some examples of the present application, the artificial cochlea collects sound signals through a built-in microphone. The process is as follows: the sound wave propagates through the air to the microphone diaphragm, which vibrates with it and produces a corresponding electrical signal according to the amplitude of the vibration; an analog-to-digital converter in the microphone then converts the acquired electrical signal into a digital signal, which is input to the artificial cochlea sound processor and processed by the artificial cochlea sound signal coding method provided by the application.
Step S12: and carrying out low-frequency characteristic extraction and high-frequency characteristic extraction on the collected sound signals.
In order to facilitate a better understanding of the technical means for extracting low-frequency features from the collected sound signals provided by the present application, the whole process of extracting low-frequency features will be further described in detail with reference to fig. 2A and 2B.
In the embodiment of the present application, the process of extracting the low frequency characteristic in the step S12 is shown in fig. 2A, and includes three processes of full-wave rectification, low-pass filtering and downsampling extraction. Specifically, as shown in fig. 2B, the following sub-steps are included:
Step S21: and performing full-wave rectification operation on the acquired sound signals.
Preferably, before performing the full-wave rectification operation on the collected sound signal, the embodiment of the present invention further performs the following step: decomposing the collected sound signal into a plurality of sub-signals using a band-pass filter bank. It should be understood that the band-pass filter bank is made up of a plurality of band-pass filters with predetermined ranges. Each band-pass filter is realised by the cooperation of a high-pass filter and a low-pass filter; specifically, the cut-off frequencies of the high-pass filter and the low-pass filter serve as the lower and upper limit frequencies of the band-pass filter, so that a sub-signal of a specific frequency range is filtered out of the input signal. The band-pass filter bank limits the frequency ranges of the sub-signals according to the number of input electrodes of the artificial cochlea, so that the complete collected sound signal is decomposed into a plurality of sub-signals.
It should be noted that the collected sound signal is decomposed into a plurality of sub-signals with a band-pass filter bank for the following reason: the technical problem to be solved by the invention is to improve the intelligibility and clarity of existing artificial cochlea coding algorithms, but in practice interference signals such as mains alternating current are present; if such interference is introduced into the subsequent coding algorithm, aliasing of the interference signal is very likely, which degrades the intelligibility and clarity of the artificial cochlea. The invention uses the band-pass filter bank to extract a number of low-frequency sub-signals suited to the artificial cochlea electrodes, so that the coverage of the sub-signals is more accurate and flexible, which is more effective at eliminating interference signals.
In some examples of the present invention, the full-wave rectification operation is implemented by a full-wave rectification circuit, which may be a center-tapped full-wave rectification circuit or a bridge rectification circuit. The acquired digital signal is input into the full-wave rectification circuit to convert the current from alternating current over a complete cycle into pulsating direct current, i.e. current whose direction does not change but whose magnitude varies with time. Full-wave rectification exploits the signal characteristics of both half-waves of the alternating current, improves the efficiency of the rectifier, and makes the rectified current easier to smooth; it therefore provides the artificial cochlea sound signal coding method with a smoother, more stable current signal that contains richer features for the subsequent coding processing, and reduces the loss of signal features during coding.
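To make the sub-band decomposition and full-wave rectification concrete, the sketch below is a minimal, assumption-laden illustration rather than the patented implementation: the band edges, the filter order, the 10-channel count and the use of scipy are all assumptions made for the example.

```python
# Illustrative sketch: split a sound signal into P low-frequency sub-signals
# with a band-pass filter bank, then full-wave rectify each sub-signal.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000                      # sampling rate (Hz), as in the worked example below
P = 10                           # number of low-frequency electrode channels (assumed)
# Assumed logarithmically spaced band edges covering the low-frequency region.
EDGES = np.logspace(np.log10(200), np.log10(4000), P + 1)

def bandpass_bank(x):
    """Return a (P, len(x)) array of band-limited sub-signals."""
    subs = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        # Each band-pass filter is realised as cascaded second-order sections.
        sos = butter(2, [lo, hi], btype="bandpass", fs=FS, output="sos")
        subs.append(sosfilt(sos, x))
    return np.stack(subs)

def full_wave_rectify(subs):
    """Digital equivalent of full-wave rectification: take absolute values."""
    return np.abs(subs)

if __name__ == "__main__":
    t = np.arange(FS) / FS
    x = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1800 * t)
    print(full_wave_rectify(bandpass_bank(x)).shape)   # (10, 16000)
```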
Step S22: and filtering the sound signal subjected to the full-wave rectification operation to obtain low-frequency signals of a plurality of channels.
In some examples of the present invention, the filtering operation on the sound signal subjected to the full-wave rectification operation proceeds as follows: the rectified sound signal is input into a low-pass filter bank with preset frequencies. It should be understood that the rectified sound signal consists of sub-signals of a plurality of channels, and the low-pass filter bank comprises a plurality of low-pass filters with different preset frequencies. Each low-pass filter allows only signals below its preset frequency to pass unattenuated, while signals above that frequency cannot pass. The low-pass filter bank further removes high-frequency noise from the sound signal and increases its smoothness, which reduces errors in the subsequent low-frequency feature extraction.
Furthermore, the invention realises the low-pass filter as a cascade of second-order IIR filters to perform the low-pass filtering operation and obtain the low-frequency signals of the plurality of channels. A second-order IIR low-pass filter is a digital counterpart of an analog circuit filter: it describes the signal variation with a second-order difference equation and suppresses noise by attenuating the high-frequency components. The cascaded implementation of second-order IIR filters can better extract sound features that contain richer low-frequency signal characteristics.
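A minimal sketch of this envelope stage, assuming a Butterworth design for the cascaded second-order IIR sections and an illustrative cut-off frequency (neither is specified by the application):

```python
# Sketch: low-pass filter each full-wave-rectified sub-signal to obtain its envelope.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000
ENV_CUTOFF_HZ = 200              # assumed envelope cut-off frequency

# Cascade of two second-order sections (order 4 overall), stored in SOS form.
ENV_SOS = butter(4, ENV_CUTOFF_HZ, btype="lowpass", fs=FS, output="sos")

def envelopes(rectified_subs):
    """rectified_subs: (P, N) rectified sub-signals -> (P, N) envelopes."""
    return np.stack([sosfilt(ENV_SOS, ch) for ch in rectified_subs])
```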
Step S23: and carrying out downsampling operation on the low-frequency signal according to a preset stimulation rate so as to obtain the low-frequency characteristic of the sound signal.
It should be understood that the preset stimulation rate refers to a stimulation rate preset according to the actual application, and the specific value of the preset stimulation rate is not limited in the embodiment of the present invention.
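As a small illustration of this step (the 1 kHz per-channel stimulation rate is borrowed from the worked example later in the description and is only one possible choice):

```python
# Sketch: keep one envelope sample per stimulation period of each channel.
FS = 16_000
STIM_RATE_HZ = 1_000             # preset per-channel stimulation rate (assumed)
D = FS // STIM_RATE_HZ           # decimation factor, 16 in this example

def downsample_envelopes(env):
    """env: (P, N) channel envelopes -> (P, N // D) low-frequency features."""
    return env[:, ::D]
```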
In order to facilitate a better understanding of the technical means for extracting high-frequency characteristics of the collected sound signals provided by the present invention, the whole process of extracting high-frequency characteristics will be further described in detail with reference to fig. 3A and 3B.
In the embodiment of the present invention, the process of extracting the high frequency characteristic in the step S12 is shown in fig. 3A, and includes two processes of calculating the energy spectrum and combining the channels. Specifically, as shown in fig. 3B, the following sub-steps are included:
Step S31: and filtering the acquired sound signals through a preset filter bank to obtain a high-frequency complex frequency spectrum.
Preferably, the filter bank used for the filtering operation in the high-frequency feature extraction may be a Weighted Overlap-Add (WOLA) filter bank. The WOLA filter bank channelizes the collected sound signal; channelization divides the frequency range of the wideband sound signal into a number of narrower frequency sub-bands, which are processed with different operation types and parameters. The channelization is implemented as follows: the WOLA filter bank slides a window function over the input sound signal in the time domain according to the window length, and during this sliding different operation types and parameters are applied to different segments of the sound signal. The operations applied to the sound signal include data weighting, overlap-add, FFT (fast Fourier transform) and complex modulation.
It should be noted that the WOLA filter bank is free from many limitations of the DFT polyphase filter structures commonly used in existing signal processing; for example, a DFT (discrete Fourier transform) filter bank requires the number of channels to match the decimation factor and cannot segment the signal. Through segmented refinement and a flexible signal processing scheme, the WOLA filter bank retains the richer high-frequency features of the original sound signal and improves the intelligibility and resolution of the artificial cochlea for high-frequency sound signals.
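The sketch below is a hedged illustration of a WOLA-style analysis stage: a frame of length La is windowed, folded (overlap-added) down to the FFT length K, and transformed into one complex spectrum per hop. The Hann prototype window and the hop size are assumptions; the actual prototype filter and the accelerator implementation are not specified here.

```python
# Sketch of WOLA-style analysis: window -> fold to length K -> FFT per hop.
import numpy as np

K = 256            # FFT length (kept sub-bands: K // 2 = 128, as in the worked example)
LA = 512           # analysis window length
HOP = 64           # hop size between successive spectra (assumed)

def wola_analyze(x, prototype=None):
    """Return a (num_frames, K // 2) array of complex sub-band values."""
    if prototype is None:
        prototype = np.hanning(LA)                      # assumed prototype analysis window
    spectra = []
    for start in range(0, len(x) - LA + 1, HOP):
        frame = x[start:start + LA] * prototype         # data weighting
        folded = frame.reshape(LA // K, K).sum(axis=0)  # overlap-add down to K samples
        spectra.append(np.fft.fft(folded)[:K // 2])     # keep positive-frequency bands
    return np.array(spectra)
```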
Step S32: and calculating the energy value of the high-frequency complex spectrum to obtain a high-frequency energy spectrum.
Illustratively, an energy spectrum is calculated from the high-frequency complex spectrum obtained through the filtering operation of the WOLA filter bank: the root mean square (RMS) of the complex-spectrum values is taken as the energy value of each sub-band to obtain the energy spectrum of the signal, which represents the distribution of energy over the sub-bands; a square-root operation is applied to the energy-spectrum values to avoid data overflow.
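A tiny sketch of this step, matching the E^{(k)} = |U^{(k)}|^2 form used in the worked example below (the square-root guard against overflow is applied after band merging and is omitted here):

```python
# Sketch: per-bin energy of the high-frequency complex spectrum.
import numpy as np

def energy_spectrum(complex_spectrum):
    """complex_spectrum: (frames, bins) complex array -> per-bin energies."""
    return np.abs(complex_spectrum) ** 2
```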
Step S33: and carrying out channel combination operation according to the high-frequency energy spectrum, and outputting characteristic energy spectrums of a plurality of sub-bands.
Illustratively, the channel merging operation according to the high-frequency energy spectrum includes: customizing a band allocation table according to actual requirements, and merging the energy spectrum into characteristic energy spectra of a number of sub-bands based on that table. The advantage of a customized band allocation table is that it makes the merging of the high-frequency energy spectrum more flexible: the merging can be adjusted to actual requirements so that the high-frequency features are retained to the greatest extent, thereby improving the resolution, intelligibility and accuracy of the artificial cochlea.
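A sketch of the merging step under an assumed band allocation table and attenuation factor; the real table is application-defined, so the bin ranges below are illustrative only.

```python
# Sketch: merge 128 WOLA bin energies into M = 12 sub-bands per a band allocation table.
import numpy as np

# Assumed allocation table: (start_bin, end_bin), inclusive, over the 128 bins.
BAND_TABLE = [(8, 11), (12, 15), (16, 20), (21, 26), (27, 33), (34, 42),
              (43, 53), (54, 66), (67, 82), (83, 101), (102, 115), (116, 127)]
G = 1.0 / 16                     # assumed attenuation factor against overflow

def merge_bands(bin_energies):
    """bin_energies: (128,) per-bin energies -> (12,) merged sub-band energies."""
    return np.array([G * bin_energies[lo:hi + 1].sum() for lo, hi in BAND_TABLE])
```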
Step S34: and sequencing the characteristic energy spectrums of the plurality of sub-bands from large to small, reserving characteristic energy spectrum values of a plurality of sub-bands with the front sequencing in the energy spectrums, and setting the characteristic energy spectrum values of the rest sub-bands to be null so as to generate and obtain high-frequency characteristics.
In some examples of the present invention, sorting the characteristic energy spectra of the sub-bands from large to small, keeping the characteristic energy-spectrum values of the top-ranked sub-bands, and setting the characteristic energy-spectrum values of the remaining sub-bands to zero to generate the high-frequency features is called the characteristic-value selection operation. Its purpose is to select the characteristic energies that correspond to the main sound information; by zeroing the energy values of the other sub-bands it eliminates high-frequency noise, interference and low-energy invalid information, while keeping the effective high-frequency feature information. This improves the resolution of the high-frequency signal and ultimately the clarity and intelligibility of the high-frequency sound information perceived by the cochlear implant wearer.
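The characteristic-value selection is essentially an N-of-M choice; a minimal sketch follows (N = 6 is borrowed from the worked example, nothing else is prescriptive):

```python
# Sketch: keep the N largest sub-band features, zero the rest, remember the channels.
import numpy as np

def select_n_of_m(features, n=6):
    """Return (selected_features, kept_channel_indices)."""
    kept = np.argsort(features)[::-1][:n]    # indices of the N largest features
    selected = np.zeros_like(features)
    selected[kept] = features[kept]          # every non-selected channel is set to zero
    return selected, np.sort(kept)
```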
Preferably, the processes of band-pass Filter bank and low-frequency feature extraction described in steps S21 to S23 may be calculated by a Filter Engine (Filter Engine); the processes of the WOLA filter bank and the high-frequency feature extraction described in the steps S31 to S34 can be realized by a configurable signal processing accelerator (HEAR Configurable Accelerator), and the rest of the coding functions can be realized by a digital signal processor.
Step S13: and performing interval mapping operation on the extracted low-frequency characteristics and the extracted high-frequency characteristics based on corresponding preset parameters respectively to obtain a low-frequency mapping interval and a high-frequency mapping interval respectively.
In order to facilitate a better understanding of the technical means for performing the interval mapping operation on the high-frequency feature and the low-frequency feature provided by the present invention, the whole process of the interval mapping operation will be further described in detail with reference to fig. 4A and fig. 4B.
In the embodiment of the present invention, the process of performing the interval mapping operation in the step S13 is shown in fig. 4A, and includes a mapping window operation, a log taking operation, and a current mapping operation. Specifically, as shown in fig. 4B, the following sub-steps are included:
step S41: and adopting a plurality of groups of threshold parameters to respectively carry out mapping window amplitude limiting operation on the low-frequency characteristic and the high-frequency characteristic of the input mapping window.
In some examples of the present invention, the mapping-window clipping operation on the low-frequency features and high-frequency features, using several groups of threshold parameters, proceeds as follows: the energy characteristic value input to the mapping interval is compared with the threshold parameters of the mapping window; if the energy characteristic value of the current sub-band is smaller than the minimum value of the mapping window, it is set to zero, i.e. the corresponding channel produces no output; if it is larger than the maximum value of the mapping window, it is set to that maximum value; otherwise the energy characteristic value is kept unchanged.
It should be noted that the purpose of the mapping-window clipping operation is to limit the energy characteristic values to a certain range, so as to eliminate the adverse effects that outlier samples might otherwise introduce. The mapping-window clipping operation improves the precision of the sound-signal processing, removes errors that outlier samples could introduce, reduces the computational cost, and further reduces the errors generated during sound-signal encoding.
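A short sketch of the clipping rule as described above (values below the window minimum silence the channel, values above the maximum are clamped):

```python
# Sketch: mapping-window clipping of a vector of energy characteristic values.
import numpy as np

def clip_to_window(features, x_min, x_max):
    out = np.asarray(features, dtype=float).copy()
    out[out < x_min] = 0.0                   # below the window: channel produces no output
    return np.minimum(out, x_max)            # above the window: clamp to the maximum
```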
Step S42: and mapping the low-frequency characteristic and the high-frequency characteristic distribution subjected to the mapping window amplitude limiting operation into a current energy level interval in a preset range.
In some examples of the invention, the timing information of the low-frequency features and of the high-frequency features is obtained as follows: the energy characteristic values after the mapping-window clipping operation are mapped to energy levels in decibels (dB) by logarithmic mapping, yielding the logarithmic mapping of the clipped energy characteristics. The advantage of the logarithmic operation is that it amplifies the feature information present in the sound signal, so that the subsequent linear mapping can supply the electrode channels corresponding to the cochlear implant electrodes with currents carrying more sound-signal features, improving the resolution and clarity of the artificial cochlea.
Further, the linear mapping operation maps the logarithmic values obtained from the logarithmic operation into a specified current-level interval and outputs them to the electrodes of the cochlear implant, so that each electrode generates a stimulation current at a specific rate and amplitude on the electrode channel corresponding to the output.
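A hedged sketch of the logarithmic and linear mapping: the exact compression law is not spelled out above, so the dB conversion and the normalisation below are assumed forms; T and C stand for the lower and upper bounds of the electrode's current-level interval.

```python
# Sketch: clipped feature -> dB-like level -> linear map into the current-level interval.
import numpy as np

def to_current_level(clipped, x_min, x_max, t_level, c_level):
    """clipped: window-clipped features (zeros stay silent); assumes x_min, x_max > 0."""
    levels = np.zeros_like(clipped, dtype=float)
    active = clipped > 0
    db = 20.0 * np.log10(clipped[active])                    # assumed logarithmic mapping
    db_min, db_max = 20.0 * np.log10(x_min), 20.0 * np.log10(x_max)
    frac = (db - db_min) / (db_max - db_min)                 # normalise to [0, 1]
    levels[active] = t_level + frac * (c_level - t_level)    # linear map into [T, C]
    return levels
```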
Step S14: and combining and outputting the low-frequency mapping interval and the high-frequency mapping interval so as to enable the electrode channel of the artificial cochlea implant to generate the stimulation current covering the low frequency domain and the high frequency domain.
Fig. 5 shows the procedure of merging the low-frequency mapping interval and the high-frequency mapping interval. The low-frequency characteristic output and the high-frequency characteristic output are controlled by different mapping windows, because the two are produced by different feature-extraction paths: the low-frequency features are extracted by the band-pass filter bank and the high-frequency features by the WOLA filter bank, so their characteristic output amplitudes are generally not the same even when the input low-frequency and high-frequency acoustic signals have the same sound pressure level. By applying different mapping-window thresholds to the low-frequency and high-frequency characteristics, acoustic signals with the same sound pressure level can be normalised to the same current level.
More preferably, as shown in fig. 6, after the timing information of the high-frequency feature is obtained and before the combined timing feature is output, the timing information of the high-frequency feature in the current level interval distributed and mapped in the preset range is up-sampled, and the timing information of the high-frequency feature is aligned with the timing information of the low-frequency feature, so that the timing information of the low-frequency feature and the timing information of the high-frequency feature are combined and output. Wherein the manner employed in the upsampling process may be linear interpolation or second order Bezier spline interpolation.
It should be noted that the up-sampling process makes the timing information of the high-frequency features consistent with that of the low-frequency features; here the timing information refers to the sampling rate of the high-frequency and low-frequency feature information. The sampling rates differ because, when the WOLA filter bank transforms the sound signal from the time domain to the frequency domain, the sampling rate of the high-frequency features becomes far lower than that of the low-frequency feature signal, and the inconsistent sampling rates would prevent the high-frequency and low-frequency feature signals from being merged and output. Up-sampling the timing information of the high-frequency features unifies the sampling rates of the high-frequency and low-frequency feature information so that they can be combined and output; the stimulation current output by the electrodes of the artificial cochlea implant thus contains both high-frequency and low-frequency features, which improves the resolution and intelligibility of the artificial cochlea.
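A small sketch of the alignment by linear interpolation: one mapped high-frequency value per analysis frame is expanded to R values per frame (R = 4 in the worked example) by interpolating between the previous frame's value and the current one.

```python
# Sketch: up-sample the high-frequency mapped output to the low-frequency timing.
import numpy as np

def upsample_high(prev_frame, curr_frame, r=4):
    """prev_frame, curr_frame: (channels,) mapped values -> (r, channels) block."""
    steps = np.arange(1, r + 1)[:, None] / r                  # 1/r, 2/r, ..., 1
    return prev_frame[None, :] + steps * (curr_frame - prev_frame)[None, :]
```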
The method for encoding the artificial cochlea sound signals provided in the embodiment of the present invention is exemplified above, and the artificial cochlea signal processor provided in the embodiment of the present invention will be explained below.
Fig. 7 is a schematic structural diagram of a cochlear implant signal processor according to an embodiment of the present invention. In this embodiment, the cochlear implant signal processor 700 includes:
the signal acquisition module 71: for collecting sound signals;
feature extraction module 72: the method is used for extracting low-frequency characteristics and high-frequency characteristics of the collected sound signals;
the section mapping module 73: the method comprises the steps of performing interval mapping operation on extracted low-frequency features and high-frequency features based on corresponding preset parameters respectively to obtain a low-frequency mapping interval and a high-frequency mapping interval respectively;
the signal output module 74: the electrode channel is used for combining and outputting the low-frequency mapping interval and the high-frequency mapping interval so as to generate stimulation current covering a low frequency domain and a high frequency domain for the electrode channel of the artificial cochlea implant.
It should be noted that: in the cochlear implant signal processor provided in the above embodiment, when encoding the cochlear implant sound signal, only the division of the program modules is used for illustration, in practical application, the processing and distribution can be completed by different program modules according to needs, that is, the internal structure of the device is divided into different program modules to complete all or part of the processing described above. In addition, the cochlear implant signal processor and the cochlear implant sound signal encoding method provided in the above embodiments are all of the same conception, and detailed implementation processes thereof are referred to the method embodiments, and are not described herein again.
In order to facilitate a better understanding of the method for encoding a cochlear implant sound signal provided by the present application, the whole process of the method for encoding a cochlear implant sound signal of the present application will be further described in detail below with reference to fig. 8.
In this embodiment of the present application, a band-pass filter bank constituted by a plurality of fourth-order IIR sub-filters is employed for the filtering operation in the low-frequency feature extraction process, and a WOLA filter bank is employed for the filtering operation in the high-frequency feature extraction process.
Specifically, the method for encoding the artificial cochlea sound signals in the embodiment mainly comprises the following steps:
step S801: the sound signal is collected and passed into the input stage.
The input stage of the present embodiment is a signal frame composed of a time domain signal of a specific length. As shown in fig. 9, the sound information collected in this embodiment is a sound signal with a sampling rate fs=16 kHz and a duration of 2.5s, and the input stage frame length is 64, that is, one frame is 4 ms.
Step S802: the input sound signal is passed into a band pass filter bank to obtain a plurality of sub-signals.
As shown in fig. 10, the band-pass filter bank used in this embodiment consists of 10 (P = 10) IIR sub-filters, each realised as a cascade of two second-order sections. For any low-frequency electrode channel p = 1, 2, …, 10, the sub-filter structure can be represented by a 2×6 coefficient matrix S^{(p)}, i.e.

S^{(p)} = \begin{bmatrix} b_0^{(p,1)} & b_1^{(p,1)} & b_2^{(p,1)} & a_0^{(p,1)} & a_1^{(p,1)} & a_2^{(p,1)} \\ b_0^{(p,2)} & b_1^{(p,2)} & b_2^{(p,2)} & a_0^{(p,2)} & a_1^{(p,2)} & a_2^{(p,2)} \end{bmatrix},  (formula 1)

where a denotes the denominator coefficients of the filter coefficient matrix and b denotes its numerator coefficients.

Its transfer function H^{(p)}(z) can be expressed as:

H^{(p)}(z) = \prod_{q=1}^{2} \frac{b_0^{(p,q)} + b_1^{(p,q)} z^{-1} + b_2^{(p,q)} z^{-2}}{a_0^{(p,q)} + a_1^{(p,q)} z^{-1} + a_2^{(p,q)} z^{-2}},  (formula 2)

where q denotes the index of the second-order section.

Further, the input signal X is filtered in turn to obtain 10 sub-signals Y^{(p)} whose frequency response ranges increase progressively, expressed as:

Y^{(p)}(z) = H^{(p)}(z)\,X(z),  (formula 3)

where X(z) denotes the z-transform of X and Y^{(p)}(z) denotes the z-transform of Y^{(p)}.
Step S803: and extracting low-frequency characteristics from the plurality of sub-signals.
As shown in fig. 11, full-wave rectification is applied to the above sub-signals Y^{(p)}, i.e.

Y'^{(p)} = \lvert Y^{(p)} \rvert.  (formula 4)

Further, a second-order IIR low-pass filter is applied to Y'^{(p)} to obtain its envelope Y''^{(p)}.

Assume that the single-channel stimulation rate of the cochlear implant system used in this example is 1 kHz, i.e. 4 energy characteristic values are output per frame for generating a stimulation signal. Down-sampling the envelope by a decimation factor D = 16, the low-frequency characteristic output of any frame in this example is:

\tilde{Y}^{(p)} = \big[\,\tilde{Y}_1^{(p)},\; \tilde{Y}_2^{(p)},\; \tilde{Y}_3^{(p)},\; \tilde{Y}_4^{(p)}\,\big], \qquad \tilde{Y}_i^{(p)} = Y''^{(p)}(iD),  (formula 5)

where \tilde{Y}_i^{(p)} denotes the i-th sample value of the output envelope of the p-th channel.
Step S804: and inputting the input sound signals into a WOLA filter bank for WOLA analysis so as to obtain a high-frequency complex frequency spectrum.
In one embodiment of the present invention, a single-channel fast Fourier transform (FFT) window length K = 256 and a WOLA analysis window length La = 512 are adopted, and the signal in the frequency band 0 to Fs/2 = 8 kHz is transformed into a complex spectrum composed of 128 sub-bands, each with a bandwidth of 62.5 Hz. The high-frequency complex spectrum can be expressed as:

U^{(k)}(\tau) = \sum_{n=0}^{L_a-1} h(n)\, x(\tau D + n)\, W_K^{-kn},  (formula 6)

where W_K = e^{j2\pi/K}, D = 64 is the down-sampling factor, k is the output sub-band index, \tau is the input frame index, and h(\cdot) is the WOLA analysis prototype filter; each signal frame has a corresponding high-frequency complex spectrum U^{(k)}.
Step S805: and performing high-frequency characteristic extraction operation on the high-frequency complex spectrum to obtain energy characteristics of a plurality of sub-bands.
In this embodiment, each subband energy E (k) is calculated for an arbitrary frame, and can be expressed as:
E (k) =|U (k) | 2 (equation 7)
Further, the 128 sub-band energies are combined into M new sub-bands according to the band allocation table; in this example M = 12, denoted as:
E′^(m) = g·(E^(t) + E^(t+1) + … + E^(t′)),  m = 1, 2, …, 12,  (formula 8)
where E′^(m) denotes the energy of the m-th combined sub-band, t and t′ denote the start and end indexes of that sub-band in the band allocation table, respectively, and g is an attenuation factor used to avoid data overflow. Taking the square root of this value yields the energy characteristic A^(m), expressed as:
A^(m) = √(E′^(m)),  (formula 9)
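The energy computation and band combination might be sketched as follows. The band allocation table and the attenuation factor g are not reproduced in the patent, so both are placeholders, and the square root follows the reading of formula 9 above:

```python
import numpy as np

M = 12
G = 1.0 / 16.0     # assumed attenuation factor g to avoid overflow
# Assumed band allocation: 12 roughly logarithmic groups over bins 8..128;
# the lowest bins are assumed to be covered by the low-frequency path.
BAND_EDGES = [8, 10, 13, 16, 21, 26, 33, 41, 52, 66, 84, 106, 128]

def band_characteristics(U: np.ndarray) -> np.ndarray:
    """Combine 128 per-bin energies |U(k)|^2 into M band characteristics A(m)."""
    E = np.abs(U) ** 2                                   # formula 7
    A = np.empty(M)
    for m in range(M):
        t, t_end = BAND_EDGES[m], BAND_EDGES[m + 1]
        A[m] = np.sqrt(G * E[t:t_end].sum())             # formulas 8 and 9
    return A
```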
Step S806: A characteristic-value maxima-selection operation is performed on the plurality of output sub-band energy characteristics to obtain the high-frequency features.
In this embodiment, the maxima-selection process includes: retaining the N largest values among the 12 combined sub-band energy characteristics, recording the corresponding electrode numbers, and setting the energy characteristic values of the remaining channels to zero. In this example N = 6: the N sub-bands with the larger energy characteristic values are selected and denoted B^(1), B^(2), …, B^(N), while the energy characteristic values of the remaining M − N channels are set to zero.
Step S807: The obtained low-frequency features and high-frequency features are input to interval mapping I and interval mapping II, respectively.
In this embodiment, the interval mapping operation proceeds as follows: two different sets of mapping window parameters, (Xmin^(1), Xmax^(1)) and (Xmin^(2), Xmax^(2)), are used to apply a window clipping operation to the low-frequency feature output and to the high-frequency feature output B^(1), B^(2), …, B^(N), respectively, i.e., each value is limited to lie within the corresponding [Xmin, Xmax] window.
Further, a logarithmic compression operation is applied to the clipped values B^(n).
Further, the compressed values are mapped linearly into the interval [C, T], where C and T denote the endpoints of the preset output current-level interval.
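The clip / log-compress / linear-map chain described above could look like the sketch below. C and T are treated simply as the endpoints of the target current-level interval, and all numeric parameters are placeholders rather than the patent's values:

```python
import numpy as np

def interval_map(v: np.ndarray, x_min: float, x_max: float,
                 c_level: float, t_level: float) -> np.ndarray:
    """Clip to [x_min, x_max], log-compress to [0, 1], then map to [C, T].

    Channels zeroed by the N-of-M step would normally be skipped rather than
    mapped; they are included here only to keep the sketch short.
    """
    clipped = np.clip(v, x_min, x_max)                            # window clipping
    compressed = np.log(clipped / x_min) / np.log(x_max / x_min)  # 0..1
    return c_level + (t_level - c_level) * compressed             # linear map to [C, T]

# Separate parameter sets for the low- and high-frequency paths (placeholders):
# low_out  = interval_map(low_features,  1e-4, 1.0, 100.0, 255.0)
# high_out = interval_map(high_features, 1e-4, 1.0, 100.0, 255.0)
```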
Step S808: An interpolation operation is performed on the high-frequency feature values output by the interval mapping II operation so that the sampling rate of the high-frequency features is aligned with the sampling rate of the low-frequency features.
In this embodiment, linear interpolation is used to up-sample the mapped output of the high-frequency portion of the system so that it is time-aligned with the low-frequency portion. For the high-frequency mapped output of the τ-th frame, the first interpolation result is calculated; further, the second interpolation result is calculated; further, the third interpolation result is calculated. The high-frequency output of the current frame is then assembled from these interpolation results.
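The exact interpolation formulas of the embodiment are not reproduced here; as a generic sketch, a linear up-sampler that turns one high-frequency value per frame into the 4 samples needed to match the low-frequency rate might look like this:

```python
import numpy as np

def upsample_high(prev_val: np.ndarray, cur_val: np.ndarray,
                  factor: int = 4) -> np.ndarray:
    """Linearly interpolate from the previous frame's mapped value to the
    current one, producing `factor` samples that end at the current value.

    prev_val, cur_val: arrays of shape (n_channels,) for frames tau-1 and tau.
    Returns an array of shape (factor, n_channels).
    """
    steps = np.arange(1, factor + 1) / factor          # 1/4, 2/4, 3/4, 1
    return prev_val + steps[:, None] * (cur_val - prev_val)
```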
Step S809: The low-frequency feature values subjected to the interval mapping I operation and the high-frequency feature values subjected to the interval mapping II and interpolation operations are combined and output.
In this embodiment, the high-frequency output and the low-frequency output are combined; under the current parameter configuration, the output for the τ-th input frame is as follows: the 1st to 10th channels of the low-frequency part are fixed output channels, while 6 channels with larger energy characteristics are selected from the 11th to 22nd channels of the high-frequency part to generate stimulation output. In the illustrated example the selected outputs are the 12th, 14th, 16th, 18th, 19th and 20th channels, and the remaining 11th, 13th, 15th, 17th, 21st and 22nd channels are all set to zero. The above steps are repeated until all signal frames of the input example signal have been traversed. As shown in fig. 12, the cochlear implant used with the above algorithm has a total of 22 (P + M = 22) electrode channels producing stimulation output, and each channel has a mutually independent signal envelope waveform that defines the electrode output.
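Putting the two paths together for one frame, under all the assumptions above (10 fixed low-frequency channels plus 12 high-frequency channels of which 6 are active), could be sketched as:

```python
import numpy as np

P, M = 10, 12   # low- and high-frequency channel counts (P + M = 22 electrodes)

def combine_frame(low_mapped: np.ndarray, high_mapped: np.ndarray) -> np.ndarray:
    """Assemble one frame of electrode outputs.

    low_mapped:  shape (4, P)  -- 4 mapped low-frequency samples per frame
    high_mapped: shape (4, M)  -- interpolated high-frequency samples per frame
                                  (zero in the channels dropped by N-of-M)
    Returns an array of shape (4, P + M): channels 1-10 low, 11-22 high.
    """
    assert low_mapped.shape[1] == P and high_mapped.shape[1] == M
    return np.concatenate([low_mapped, high_mapped], axis=1)
```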
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method embodiments described above may be completed by hardware associated with a computer program. The aforementioned computer program may be stored in a computer-readable storage medium. When executed, the program performs steps including those of the method embodiments described above; and the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disks or optical disks.
In the embodiments provided herein, the computer-readable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, a USB flash drive, a removable hard disk, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In summary, the present application provides an artificial cochlea sound signal coding method, a processor, a medium and an artificial cochlea. The proposed coding method improves the clarity and intelligibility of the artificial cochlea by applying different coding methods to the high-frequency feature part and the low-frequency feature part of the acquired sound signal, thereby improving the performance of the artificial cochlea sound signal coding algorithm on the low-power, low-compute signal processing devices of the prior art. Therefore, the application effectively overcomes various shortcomings of the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Anyone skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present application.
Claims (10)
1. An artificial cochlea sound signal coding method, characterized in that it is applied to an artificial cochlea signal processor; the method comprises the following steps:
Collecting sound signals;
extracting low-frequency characteristics and high-frequency characteristics of the collected sound signals;
performing interval mapping operation on the extracted low-frequency characteristics and the extracted high-frequency characteristics based on corresponding preset parameters respectively to obtain a low-frequency mapping interval and a high-frequency mapping interval respectively;
and combining and outputting the low-frequency mapping interval and the high-frequency mapping interval so as to enable the electrode channel of the artificial cochlea implant to generate the stimulation current covering the low frequency domain and the high frequency domain.
2. The method for encoding a cochlear implant sound signal according to claim 1, wherein the process of extracting the low frequency characteristics of the collected sound signal comprises:
performing full-wave rectification operation on the collected sound signals;
filtering the sound signal subjected to full-wave rectification operation to obtain low-frequency signals of a plurality of channels;
and carrying out downsampling operation on the low-frequency signal according to a preset stimulation rate so as to obtain the low-frequency characteristic of the sound signal.
3. The method for encoding a cochlear implant sound signal according to claim 2, wherein the sound signal subjected to the full-wave rectifying operation is subjected to a filtering operation using a low-pass filter.
4. The method for encoding a cochlear implant sound signal according to claim 1, wherein the process of extracting the high-frequency characteristics of the collected sound signal comprises:
filtering the collected sound signals through a preset filter bank to obtain a high-frequency complex frequency spectrum;
calculating the energy value of the high-frequency complex spectrum to obtain a high-frequency energy spectrum;
carrying out channel combination operation according to the high-frequency energy spectrum and outputting characteristic energy spectrums of a plurality of sub-bands;
and sequencing the characteristic energy spectrums of the plurality of sub-bands from large to small, reserving the characteristic energy spectrum values of a plurality of top-ranked sub-bands, and setting the characteristic energy spectrum values of the remaining sub-bands to zero so as to generate the high-frequency features.
5. The method for encoding cochlear implant sound signals of claim 4, wherein the predetermined filter bank comprises a WOLA filter bank.
6. The method for encoding a cochlear implant sound signal according to claim 1, wherein the process of performing the interval mapping operation on the extracted low-frequency features and the high-frequency features, respectively, includes:
performing a mapping window amplitude limiting operation, using a plurality of groups of threshold parameters, on the low-frequency features and the high-frequency features input to the mapping window; wherein each mapping window is a group of threshold parameters comprising a maximum value parameter and a minimum value parameter;
and respectively mapping the low-frequency features and the high-frequency features subjected to the mapping window amplitude limiting operation into a current energy level interval within a preset range, so as to obtain the time sequence information of the low-frequency features and the time sequence information of the high-frequency features.
7. The method for encoding a cochlear implant sound signal according to claim 6, wherein the process of performing the interval mapping operation on the high frequency features includes, after obtaining the timing information of the high frequency features and before outputting, further performing the following steps:
and performing an up-sampling operation on the time sequence information of the high-frequency features that has been mapped into the current energy level interval within the preset range, and aligning the time sequence information of the high-frequency features with the time sequence information of the low-frequency features, so that the time sequence information of the low-frequency features and the time sequence information of the high-frequency features are combined and output.
8. A cochlear implant signal processor, wherein a communication connection is established with an artificial cochlea implant, the cochlear implant signal processor comprising:
the signal acquisition module: for collecting sound signals;
a feature extraction module: for extracting the low-frequency features and the high-frequency features of the collected sound signals;
an interval mapping module: for performing interval mapping operations on the extracted low-frequency features and high-frequency features based on corresponding preset parameters, respectively, so as to obtain a low-frequency mapping interval and a high-frequency mapping interval, respectively;
a signal output module: for combining and outputting the low-frequency mapping interval and the high-frequency mapping interval, so that the electrode channels of the artificial cochlea implant generate stimulation current covering the low frequency domain and the high frequency domain.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1 to 7.
10. An artificial cochlea, comprising the cochlear implant signal processor of claim 8 and an artificial cochlea implant; wherein the cochlear implant signal processor establishes a communication connection with the artificial cochlea implant.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202310761240.2A | 2023-06-26 | 2023-06-26 | Artificial cochlea sound signal coding method, processor, medium and artificial cochlea
Publications (1)
Publication Number | Publication Date
---|---
CN117012212A | 2023-11-07
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |