EP1858292B2 - Hearing device and method of operating a hearing device - Google Patents
Hearing device and method of operating a hearing device
- Publication number
- EP1858292B2 (application EP06120253.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- class
- sub
- parameter set
- acoustic environment
- audio signals
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2225/00—Details of deaf aids covered by H04R25/00, not provided for in any of its subgroups
- H04R2225/41—Detection or adaptation of hearing aid parameters or programs to listening situation, e.g. pub, forest
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/40—Arrangements for obtaining a desired directivity characteristic
- H04R25/407—Circuits for combining signals of a plurality of transducers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/502—Customised settings for obtaining desired overall acoustical characteristics using analog signal processing
Definitions
- the invention relates to a method for operating a hearing device and to a hearing device.
- Under "hearing device", a device is understood which is worn adjacent to or in an individual's ear with the object of improving the individual's acoustical perception. Such improvement may also consist in barring acoustical signals from being perceived, in the sense of hearing protection for the individual.
- If the hearing device is tailored so as to improve the perception of a hearing-impaired individual towards the hearing perception of a "standard" individual, then we speak of a hearing-aid device.
- a hearing device may be applied behind the ear, in the ear, completely in the ear canal or may be implanted.
- a hearing system comprises at least one hearing device.
- a hearing system comprises, in addition, another device, which is operationally connected to said hearing device, e.g., another hearing device or a remote control.
- Modern hearing devices, in particular hearing-aid devices, when employing different hearing programs (typically two to four; also referred to as audiophonic programs), permit their adaptation to varying acoustic environments, also referred to as acoustic scenes or acoustic situations.
- the idea is to optimize the effectiveness of the hearing device for the hearing device user in all situations.
- The hearing program can be selected either via a remote control or by means of a selector switch on the hearing device itself. For many users, however, having to switch program settings is a nuisance, or it is difficult or even impossible. It is also not always easy, even for experienced users of hearing devices, to determine which program is suited best and offers optimum speech intelligibility at a certain point in time. An automatic recognition of the acoustic scene and a corresponding automatic switching of the program setting in the hearing device is therefore desirable.
- the switch from one hearing program to another can also be considered a change in a transfer function of the hearing device, which transfer function describes how input audio signals generated by an input transducer unit of the hearing device relate to output audio signals to be fed to an output transducer unit of the hearing device.
- There exist several different approaches to the automatic classification of acoustic environments, also referred to as acoustic surroundings.
- the methods concerned involve the extraction of different characteristics from an input signal. Based on the so-derived characteristics, a pattern-recognition unit employing a particular algorithm makes a determination as to the attribution of the analyzed signal to a specific acoustic environment.
- Not in all acoustic environments does the program change based on the classification result provide for an optimum hearing sensation for the user. It is desirable to provide for an improved automatic adaptation of the transfer function of the hearing device to a current (actual) acoustic environment.
- WO 99/65275 A1 discloses a device, e.g., a hearing device, with a signal processor, wherein parameters of the signal processor are directly steered in dependence of the input signals.
- EP 1 841 286 A2 relates to a method of adjusting a hearing aid to acoustic environments and user preferences, wherein an initial value of a signal processing parameter, such as the volume, is adjusted according to automatic or manual variations of the actual value of the parameter occurring during use of the hearing aid.
- the initial value is the value which is automatically set when the hearing aid is turned on or when it switches to a new hearing program.
- the algorithm used for adjusting the initial value may include exponential averaging of the actual values.
- AGC: automatic gain control.
- A device and a method for adapting a hearing aid are disclosed.
- Generic hearing situations are determined. Such generic hearing situations are mathematically described by feature vectors which are orthogonal to each other. Realistic hearing situations such as "music", "speech" and so on can be composed from the generic hearing situations.
- a functional relation between feature vectors and fitting parameters is determined. A feature vector of an arbitrary audio signal is interpreted as a superposition of the feature vectors of the generic hearing situations, with weight factors that are specific to the audio signal.
- The feature vector of incoming audio signals is determined.
- Fitting parameters are determined from the feature vector of the incoming audio signals. Those fitting parameters allow mixed situations to be considered continuously. The corresponding fitting vector can be smoothed.
- Further, a programmable hearing aid is known in which sampled signals are transformed into a frequency domain and then evaluated. In dependence of the evaluation, one or more parameters of the transmission path are selected. Therein, a measure for the similarity between the current hearing situation and several predetermined hearing situations is determined. It is suggested to determine a hearing program to be used in a current hearing situation in dependence of this measure. In particular, each of the signal processing parameters to be used shall be determined by weighting the signal processing parameters determined for the predetermined hearing situations with the corresponding measure for the similarity between the current hearing situation and the corresponding predetermined hearing situation.
- One object of the invention is to create a hearing device and a method for operating a hearing device, which provide for an improved automatic adaptation of its transfer function to a current acoustic environment.
- Another object of the invention is to provide for a flexibly adjustable way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to provide for a safe and robust way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to provide for a reliable and reproducible way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to avoid that a user of the hearing device is annoyed by sudden strong changes in the transfer function.
- Another object of the invention is to avoid that a user of the hearing device is annoyed by repeated recognizable changes in the transfer function.
- the invention relates to a method for operating a hearing device as defined in claim 1.
- the method may be considered a method for adapting a transfer function of a hearing device to a current acoustic environment or to changes in an acoustic environment.
- the invention relates to a hearing device as defined in claim 9.
- the hearing device according to the invention has a number of base parameter sets. These will usually be selected such that, applied to the transfer function (or, more particularly, each applied to the corresponding sub-function), they provide for an optimum hearing sensation in a predetermined acoustic environment.
- the base parameter sets may be found during a fitting procedure (also referred to as adaptation procedure or as training procedure), e.g., in a manner that is known from hearing-aid devices with a number of hearing programs between which one can switch.
- During the normal operation of the hearing device (which is different from a fitting or training phase), the current acoustic environment is analyzed, and a vector is derived which contains information on the likenesses (similarities) of the current acoustic environment and each of the predetermined acoustic environments.
- the hearing device is capable of weighting the base parameter sets in dependence of their corresponding similarity value. This way, transfer function parameters can be adapted to changes of the acoustic situation in a continuous way. This adaptation is based on predetermined settings, which provides for robustness and reproducibility.
- a continuous mixture of hearing programs is achieved by mixing, in dependence of the current acoustic environment, parameters of the transfer function within the framework of predetermined base parameter settings.
- The invention takes into account that real-world acoustic environments seldom correspond to pure sound classes like (pure) "music", (pure) "speech" or (pure) "speech in noise", but mostly have aspects of various classes. It also takes into account the existing knowledge of the fitter to define base parameter sets optimized for pure sound situations and builds upon this know-how.
- the invention may be considered to provide for a "mixed-mode classification" or for a “mixed-program mode”.
- Said similarity values can be obtained in a straightforward manner from evaluating the differences between a classification result for the current acoustic environment and the classification result for each of said predetermined acoustic environments.
- E.g., Euclidean distances or multivariate variance analysis can be used for obtaining such a difference.
- The invention makes it possible to prevent the occurrence of repeated strong changes in the transfer function, since transfer function parameters can be changed smoothly.
- On the other hand, the invention provides for reliable and predictable changes in the transfer function, since the framework of the base parameter sets prevents parameters from changing in an undesired way or developing towards strange, inadequate settings. The latter might happen in solutions with an "automatic" adaptation of parameter sets based upon artificial cost functions, which do not fully reflect human audiological perception.
- The invention is particularly useful also in hearing systems comprising two hearing devices (one dedicated to each ear of the user), in particular if the two hearing devices cannot communicate with each other, since differences in the transfer function changes between the two hearing devices - in particular if occurring in a stepwise manner - may be easily recognizable by the user and can be rather disturbing.
- the transfer function may and usually will comprise two or more sub-functions, which shall undergo changes when the acoustic environment changes.
- The transfer function, through which usually many kinds of signal processing can be realized (including filtering, amplifying, compressing and many others), is subdivided into a number of meaningfully combined parts (the sub-functions), and at least some of the sub-functions can be controlled by an associated activity parameter set.
- Through a sub-function, e.g., beam forming, noise cancelling, feedback cancelling, dynamics processing or filtering may be realized.
- An activity parameter set may be several (two, three, four or more) parameters (values, numbers), but it may also be just one value or, in particular, one number, which could be considered a strength or an activity setting.
- a one-number strength may, e.g., range from “off” to "fully on” (or from 0 to 1 or from 0 % to 100 %) and indicate the degree to which the corresponding sub-function shall take effect or be in force.
- the activity setting could range from an omni-directional polar pattern to a maximally focussed directional characteristic typically towards the front (nose) of the hearing device user.
- the activity parameter sets are obtained in dependence of the current acoustic environment. Accordingly, parameters of activity parameter sets are not predetermined and fixed.
- The value or values making up an activity parameter set are, during normal operation of the hearing device, frequently, typically quasi-continuously, re-calculated and updated. Therefore, the activity parameter sets are dynamic parameter sets. Accordingly, they can be considered sets of signals, referred to as activity signal sets.
- In one embodiment, for each of said N classes, a class weight factor is derived from the corresponding class similarity factor, and, for each of said M sub-functions, said deriving of said activity parameter set comprises weighting each base parameter set assigned to the respective sub-function with the corresponding class weight factor.
- Said deriving of said class weight factors may comprise, for at least one of said N classes, multiplication with an individual class factor and/or addition of an individual class offset.
- the invention may be seen in using a time-averaged activity parameter set for controlling at least one sub-function.
- This aspect can be of great value in conjunction with the above-described aspect of the invention ("mixed-mode classification” or “mixed-program mode” aspect), but it may be applied separately therefrom, in conjunction with any hearing device, which allows for gradual changes in the transfer function during normal operation, in particular when such changes in the transfer function are accomplished or requested automatically.
- Said activity parameter set may be just one parameter of the transfer function or a number of parameters of the transfer function.
- This second aspect of the invention makes it possible to provide for smooth changes in the transfer function, even if rather quick back-and-forth changes occur because of strongly changing acoustic environments.
- an averaging time for said time-averaging is chosen in dependence of past changes in the activity parameter set. I.e., the averaging time is chosen differently when the activity parameter set has changed a lot in the recent past with respect to when the activity parameter set has hardly changed in the recent past.
- The averaging time may be decreased when said past changes in the activity parameter set decrease, and increased when said past changes in the activity parameter set increase.
- This kind of behavior can strongly decrease annoyingly fast changes in the transfer function when they are inadequate, while allowing for fast changes in the transfer function when they are necessary.
- Fig. 1 shows a diagrammatical illustration of a hearing device 1, which comprises an input transducer unit 2, e.g., a microphone or an arrangement of microphones, for transducing sound from the current (actual) acoustic environment into input audio signals S1, wherein audio signals are electrical signals, of analogue and/or digital type, which represent sound.
- the input audio signals S1 are fed to a signal processing unit 3 for processing according to a transfer function G, which can be adapted to the needs of a user of the hearing device in dependence of said current acoustic environment.
- the transfer function G is or comprises at least one sub-function.
- the transfer function G is or comprises only one sub-function g1, which is realized in a signal processing circuit 3/1.
- Said signal processing circuit 3/1 may, e.g., provide for beam forming or for noise suppression or for another part of the transfer function G.
- From the input signals S1, the signal processing circuit 3 derives output audio signals S2, which are fed to an output transducer unit 5, e.g., a loudspeaker.
- the output transducer unit 5 transduces the output audio signals S2 into signals to be perceived by the user of the hearing device, e.g., into acoustic sound, as indicated in Fig. 1 .
- a set of N class similarity factors p1...pN is output, wherein each of the class similarity factors p1...pN is indicative of the similarity of said current acoustic environment with the respective predetermined acoustic environment of classes C1...CN or, put in other words, of the likeness (resemblance) of said current acoustic environment and the respective predetermined acoustic environment, or, expressed differently, of the degree of correspondence between said current acoustic environment and the respective predetermined acoustic environment.
- the classification may be accomplished in various ways known in the art.
- the input audio signals S1 may be fed to a feature extractor FE, in which a set of (technical, auditory or other) features are extracted from the input audio signals S1. That set of features is analyzed and classified in a classifier C, which also provides for further processing in order to derive said class similarity factors p1...pN.
- Typical classes may be "speech”, “speech in noise”, “noise”, “music” or others.
- Typical features are, e.g., spectral shape, harmonic structure, coherent frequency and/or amplitude modulations, signal-to-noise ratio, spectral center of gravity, spatial distribution of sound sources and many more.
- the automatic adaptation of the transfer function G is on the one hand based on said class similarity factors p1...pN and on the other hand based on base parameter sets.
- Said base parameter sets are predetermined, and their respective values are usually obtained during a fitting procedure and/or may be at least partly pre-defined in the hearing device 1.
- For each sub-function, one base parameter set B1/1,...,B1/N is provided per class: B1/1 for class C1, B1/2 for class C2, ... and B1/N for class CN. I.e., for each class C1...CN and each sub-function, there is one base parameter set.
- Each base parameter set comprises data (typically one number or several numbers), which optimally adjust the respective sub-function to the user's needs and preferences in the respective pre-defined acoustic environment.
- the base parameter sets are mixed in dependence of their class similarity factors p1...pN. In the embodiment of Fig. 1 , this is accomplished by multiplying each base parameter set B1/1,...,B1/N with a respective class weight factor P1...PN and summing up the accordingly weighted base parameter sets B1/1,...,B1/N in a processing unit 8. Said multiplication and summing up of base parameter sets is done separately for each parameter of a base parameter set.
- Said class weight factors P1...PN are derived from said class similarity factors p1...pN.
- The class weight factors P1...PN are obtained by adding to each class similarity factor p1...pN an individual class offset o1...oN and multiplying the result (class-wise) by an individual class factor f1...fN.
- An optional normalization of the class weight factors P1...PN is not shown in Fig. 1 . This enables an adaptation of the mixing and, accordingly, of the whole automatic adaptation behaviour, to preferences of the user.
- the processing unit 8 outputs an activity parameter set a1 (generally: one for each sub-function), which is fed to the transfer function G, or, more precisely, to the respective sub-function. Accordingly, the transfer function G is adapted to the current acoustic environment in a fashion based on the predetermined base parameter sets.
- Zero beam forming activity will usually mean that an omnidirectional polar pattern of the input transducer unit 2 shall be used, and full beam forming activity will typically mean that a high sensitivity towards the front direction (along the user's nose) shall be used, with little sensitivity for sound from other directions.
- the beam former may provide for a medium emphasis of sound from the front hemisphere and only little suppression of sound from elsewhere.
- The base parameter sets B1/1, B1/2 will usually be derived in a fitting procedure and indicate the amplification, in dependence of incoming signal power, that shall be used; characterized, e.g., in terms of decibel values characterizing the incoming signal power and compression values characterizing the steepness of increase of the output signal with increasing incoming signal power.
- E.g., for music, B1/1 = (50dB, 2.5; 90dB, 0.8; 110dB, 0.3; 0), indicating expansion below 50dB, light compression up to 90dB, strong compression up to 110dB and limiting (infinite compression) thereabove.
- For speech, other values may be used, e.g., B1/2 = (30dB, 2.5; 80dB, 0.4; 105dB, 0.2; 0), indicating expansion below 30dB, medium compression up to 80dB, strong compression up to 105dB and limiting thereabove.
- gain models are furthermore frequency-dependent, so that the base parameter sets will, in addition, comprise frequency values and, accordingly, even more decibel values and compression values (for the various frequency ranges).
- I.e., the gain model is a linear combination of the gain model for music and the gain model for speech, obtained in processing unit 8.
- the activity parameter set a1 may be identical with this linear combination.
- Such an activity parameter set a1 is, of course, no more just a simple strength value or an activity setting.
- Such an activity parameter set a1 can already be, without further processing, the parameters used in the corresponding sub-function.
- Said class similarity factors p1, p2 can be obtained, e.g., in the following manner (in classifier unit 4):
- a number of features is extracted from the input audio signals S1, e.g., rather technical characteristics like the signal power between 200 Hz and 600 Hz relative to the overall signal power and the harmonicity of the signal, or auditory-based characteristics like common build-up and decay processes and coherent amplitude modulations.
- Each examined feature provides for at least one value in a feature vector.
- For one specific current acoustic environment (represented by the input audio signals S1), the feature vector might be (3.0; 2.6; 4.1); note that usually there will be between 5 and 10 or even more features and vector components.
- the class similarity factors p1, p2 are a measure for the inverse distance between the feature vector of the current acoustic environment and the feature vector of class C1 and class C2, respectively.
- p1, p2 are measures for the closeness of the feature vector of the current acoustic environment and the feature vector of class C1 and class C2, respectively.
- A measure for said distance can be obtained, e.g., as the Euclidean distance between the vectors, or by means of multivariate variance analysis.
- In this case, the current acoustic environment is more similar to class C2 than to class C1, since p1 < p2.
- Of course, a normalization of each feature vector component (corresponding to a specific feature), e.g., to a range from 0 to 1, and/or a normalization during the determination of p1, p2 is advisable, and it is also possible to weight different features differently strongly when determining p1, p2.
- A suitable normalization makes it possible to generate class similarity factors which lie between 0 and 1 and can therefore be expressed in percent (%), wherein the likeness of the current acoustic environment with a predetermined acoustic environment is the higher, the higher (and closer to 100 %) the corresponding class similarity factor is.
- the p1, p2 values in the two simple examples above were assumed to be class similarity factors normalized in such a way.
- Fig. 2 shows a diagrammatical illustration of a hearing device 1, which is similar to the hearing device 1 of Fig. 1 ; the underlying principle is basically the same as in Fig. 1 .
- the hearing device 1 comprises an averaging unit 9, and at least two sub-functions g1...gM are drawn.
- the class similarity factors are processed by a processing circuit 6, which outputs the class weight factors P1...PN.
- the processing circuit 6 may perform various calculations, in particular take care of individual adaptations as provided by f1...fN and o1...oN (see Fig. 1 ).
- the averaging unit 9 outputs time-averaged activity parameter sets a1* ... aM*, which are used for steering the sub-functions g1...gM.
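- For illustration, the per-frame signal flow of Fig. 2 can be sketched as follows (Python, not part of the patent; the unit names follow the reference symbols, while the function names, the plain exponential averager and all numeric values are assumptions):
```python
# Illustrative sketch of the Fig. 2 signal flow: classifier unit 4 -> class weight factors
# (processing circuit 6) -> mixing (processing unit 8) -> time-averaging (averaging unit 9)
# -> sub-functions g1...gM. All numbers are made up; a plain exponential averager stands in
# for the averaging unit described further below.

N, M = 2, 1                        # two classes (e.g. music, speech in noise), one sub-function
base_sets = [[0.0, 1.0]]           # B1/1, B1/2 for the single sub-function g1
offsets = [0.0, 0.0]               # individual class offsets o1...oN
factors = [1.0, 1.0]               # individual class factors f1...fN
a_avg = [0.0] * M                  # time-averaged activity parameter sets a1*...aM*
ALPHA = 0.1                        # smoothing coefficient of the stand-in averager

def process_frame(p):
    """p: class similarity factors p1...pN delivered by classifier unit 4 for one frame."""
    P = [factors[n] * (p[n] + offsets[n]) for n in range(N)]   # class weight factors P1...PN
    for m in range(M):
        a_m = sum(P[n] * base_sets[m][n] for n in range(N))    # activity parameter set a_m
        a_avg[m] += ALPHA * (a_m - a_avg[m])                   # time-averaged a_m*
    return list(a_avg)

for _ in range(30):
    out = process_frame([0.4, 0.6])    # restaurant-like scene, repeated over several frames
print(out)                             # a1* has drifted towards the mixed value 0.6
```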
- A preferable behaviour of the adaptation of the transfer function G shall, as far as possible, fulfil the points illustrated below.
- Fig. 3 is a schematic illustration of an activity parameter a1 and a corresponding time-averaged activity parameter a1* as a function of time t, which shall illustrate the above-depicted behaviour, wherein - for reasons of simplicity - only one parameter of an activity parameter set, or an activity parameter set comprising only one parameter is assumed.
- a1* will not fully follow a1.
- a1* slowly drifts towards a1.
- a rapid strong change in a1 will be followed by a1* rather quickly and in full.
- the averaging unit 9 receives a1(t) and outputs a1*(t).
- The averaging time τ, during which a1(t)-values are averaged, is controlled in dependence of past a1(t)-values.
- a1(t) is fed to a differentiator 91, which outputs a value representative of the derivative of a1(t), i.e., a measure for the changes in a1(t).
- The absolute value is taken (reference 92), which is then integrated (summed up) in a leaky integrator 93.
- Thereby, the time until which the circuit reacts again to a fast change of the input, after a series of former fast input changes, is determined.
- a measure for the magnitude of changes during the past time is obtained.
- The corresponding value can be multiplied with a base time constant t0 for adjustment.
- The so-obtained value is used as the time constant τ for an averager 90, which averages a1(t) during a time span τ and outputs the so-derived a1*(t).
- Using an averager with different attack and release time constants allows the averaging unit to settle towards a predetermined percentage of the dynamic range of the many fast changes, when many fast changes occur. Only when the input to the averaging unit settles does the output of the averaging unit slowly follow.
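- A rough sketch of such an averaging unit is given below (Python, for illustration only; the concrete update rules and all constants are assumptions, only the qualitative behaviour follows the description):
```python
# Rough sketch of the averaging unit of Fig. 4 (references 90-93). The averaging time grows
# while a1(t) keeps changing and shrinks again once a1(t) settles, as described above.

class AdaptiveAverager:
    def __init__(self, t0=200.0, attack=0.5, release=0.01):
        self.t0 = t0            # base time constant t0 (here: in frames)
        self.attack = attack    # leaky-integrator coefficient while the change measure rises
        self.release = release  # leaky-integrator coefficient while it decays (slow release)
        self.prev = 0.0         # state of differentiator 91
        self.change = 0.0       # output of leaky integrator 93 (measure of past changes)
        self.out = 0.0          # time-averaged output a1*(t) of averager 90

    def update(self, a1):
        d = abs(a1 - self.prev)                    # differentiator 91 + absolute value 92
        self.prev = a1
        coeff = self.attack if d > self.change else self.release
        self.change += coeff * (d - self.change)   # leaky integrator 93
        tau = 1.0 + self.t0 * self.change          # averaging time: large after many changes
        self.out += (a1 - self.out) / tau          # averager 90 (first-order smoothing)
        return self.out

avg = AdaptiveAverager()
for k in range(200):
    a1_star = avg.update(1.0 if (k // 10) % 2 else 0.0)   # fast back-and-forth input a1(t)
# a1_star changes only slowly despite the rapid back-and-forth of a1(t)
```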
- Both, the averaging in the averaging unit 9 and the processing in the processing unit 8 may be adjusted individually for different parameters of an activity parameter set and/or for parameter sets for different sub-functions.
- For sub-functions whose changes are more noticeable to the user, greater time constants for averaging may be chosen (e.g., via t0), whereas a more rapid following of a1*(t) to a1(t) may be chosen for sub-functions that result in less strong irritations when changed.
- different ratios of attack time constants to release time constants may be chosen for different sub-functions.
- That parameter can be considered the "strength" or the "activity" of the sub-function.
- A time-averaging like the one described above may not only be used for activity parameters (or, more particularly, for each value or number of an activity parameter set), but may also be used, in general, for smoothing any other adjustments of a transfer function G. It is applicable to any (dynamically and/or continuously) adjustable processing algorithm.
Description
- The invention relates to a method for operating a hearing device and to a hearing device. Under "hearing device", a device is understood which is worn adjacent to or in an individual's ear with the object of improving the individual's acoustical perception. Such improvement may also consist in barring acoustical signals from being perceived, in the sense of hearing protection for the individual. If the hearing device is tailored so as to improve the perception of a hearing-impaired individual towards the hearing perception of a "standard" individual, then we speak of a hearing-aid device. With respect to the application area, a hearing device may be applied behind the ear, in the ear, completely in the ear canal or may be implanted. A hearing system comprises at least one hearing device. Typically, a hearing system comprises, in addition, another device which is operationally connected to said hearing device, e.g., another hearing device or a remote control.
- Modern hearing devices, in particular, hearing-aid devices, when employing different hearing programs (typically two to four; also referred to as audiophonic programs), permit their adaptation to varying acoustic environments, also referred to as acoustic scenes or acoustic situations. The idea is to optimize the effectiveness of the hearing device for the hearing device user in all situations.
- The hearing program can be selected either via a remote control or by means of a selector switch on the hearing device itself. For many users, however, having to switch program settings is a nuisance, or it is difficult or even impossible. It is also not always easy, even for experienced users of hearing devices, to determine which program is suited best and offers optimum speech intelligibility at a certain point in time. An automatic recognition of the acoustic scene and a corresponding automatic switching of the program setting in the hearing device is therefore desirable.
- The switch from one hearing program to another can also be considered a change in a transfer function of the hearing device, which transfer function describes how input audio signals generated by an input transducer unit of the hearing device relate to output audio signals to be fed to an output transducer unit of the hearing device.
- There exist several different approaches to the automatic classification of acoustic environments (also referred to as acoustic surroundings). Typically, the methods concerned involve the extraction of different characteristics from an input signal. Based on the so-derived characteristics, a pattern-recognition unit employing a particular algorithm makes a determination as to the attribution of the analyzed signal to a specific acoustic environment.
- As examples for classification methods and their application in hearing systems, the following publications shall be named:
- WO 01/20965 A2, WO 01/22790 A2 and WO 02/32208 A2.
- Not in all acoustic environments does the program change based on the classification result provide for an optimum hearing sensation for the user. It is desirable to provide for an improved automatic adaptation of the transfer function of the hearing device to a current (actual) acoustic environment.
- From US 5'604'812, a hearing device is known which, in the absence of pre-stored hearing device settings, automatically and continuously adapts the transfer function by means of fuzzy logic. The results of such an approach may be unpredictable and might lead to undesired hearing device settings.
- From WO 99/65275 A1, a device, e.g., a hearing device, with a signal processor is known, wherein parameters of the signal processor are directly steered in dependence of the input signals.
- EP 1 841 286 A2 relates to a method of adjusting a hearing aid to acoustic environments and user preferences, wherein an initial value of a signal processing parameter, such as the volume, is adjusted according to automatic or manual variations of the actual value of the parameter occurring during use of the hearing aid; the algorithm used for adjusting the initial value may include exponential averaging of the actual values.
- Further prior art is known from EP 1 404 152 A2, EP 0 788 290 A1 and EP 1 307 072 A2.
- One object of the invention is to create a hearing device and a method for operating a hearing device, which provide for an improved automatic adaptation of its transfer function to a current acoustic environment.
- Another object of the invention is to provide for a flexibly adjustable way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to provide for a safe and robust way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to provide for a reliable and reproducible way for automatically adapting the transfer function to a current acoustic environment.
- Another object of the invention is to avoid that a user of the hearing device is annoyed by sudden strong changes in the transfer function.
- Another object of the invention is to avoid that a user of the hearing device is annoyed by repeated recognizable changes in the transfer function.
- At least one of these objects is at least partially achieved by the methods and apparatuses according to the patent claims.
- Further objects emerge from the description and embodiments below.
- The invention relates to a method for operating a hearing device as defined in claim 1.
- The method may be considered a method for adapting a transfer function of a hearing device to a current acoustic environment or to changes in an acoustic environment.
- The invention relates to a hearing device as defined in claim 9.
- Considered under a slightly different point of view, the hearing device according to the invention has a number of base parameter sets. These will usually be selected such that, applied to the transfer function (or, more particularly, each applied to the corresponding sub-function), they provide for an optimum hearing sensation in a predetermined acoustic environment. The base parameter sets may be found during a fitting procedure (also referred to as adaptation procedure or as training procedure), e.g., in a manner that is known from hearing-aid devices with a number of hearing programs between which one can switch. During the normal operation of the hearing device (which is different from a fitting or training phase), the current acoustic environment is analyzed, and a vector is derived which contains information on the likenesses (similarities) of the current acoustic environment and each of the predetermined acoustic environments. Instead of only being able to simply choose that one base parameter set belonging to the highest similarity value, the hearing device is capable of weighting the base parameter sets in dependence of their corresponding similarity value. This way, transfer function parameters can be adapted to changes of the acoustic situation in a continuous way. This adaptation is based on predetermined settings, which provides for robustness and reproducibility.
- Considered under another slightly different point of view, a continuous mixture of hearing programs is achieved by mixing, in dependence of the current acoustic environment, parameters of the transfer function within the framework of predetermined base parameter settings. The invention takes into account that real-world acoustic environments seldom correspond to pure sound classes like (pure) "music", (pure) "speech" or (pure) "speech in noise", but mostly have aspects of various classes. It also takes into account the existing knowledge of the fitter to define base parameter sets optimized for pure sound situations and builds upon this know-how.
- The invention may be considered to provide for a "mixed-mode classification" or for a "mixed-program mode".
- Within the present patent application, the terms "classes", "classifying" and "classification" are certainly not meant to be confined to solely assigning that one class to a current acoustic environment which describes said current acoustic environment best; rather, they refer to any way of obtaining, for each of a multitude (2, 3, 4, 5 or more) of predetermined acoustic environments, a measure for the similarity (likeness, resemblance) of said current acoustic environment and the predetermined acoustic environment described by a respective class.
- From a certain point of view, it is nevertheless possible to split up said step of deriving, on the basis of said input audio signals and for each class of N classes each of which describes a predetermined acoustic environment, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class (wherein N is an integer with N ≥ 2), into the two steps of
- classifying, on the basis of said input audio signals, said current acoustic environment according to a set of N predetermined classes, which describe one predetermined acoustic environment each, wherein N is an integer with N ≥ 2; and
- outputting, for each of said N classes, a class similarity factor indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class.
- Said similarity values can be obtained in a straightforward manner from evaluating the differences between a classification result for the current acoustic environment and the classification result for each of said predetermined acoustic environments. E.g., Euclidean distances or multivariate variance analysis can be used for obtaining such a difference.
- The invention makes it possible to prevent the occurrence of repeated strong changes in the transfer function, since transfer function parameters can be changed smoothly. On the other hand, the invention provides for reliable and predictable changes in the transfer function, since the framework of the base parameter sets prevents parameters from changing in an undesired way or developing towards strange, inadequate settings. The latter might happen in solutions with an "automatic" adaptation of parameter sets based upon artificial cost functions, which do not fully reflect human audiological perception.
- The invention is particularly useful also in hearing systems comprising two hearing devices (one dedicated to each ear of the user), in particular if the two hearing devices cannot communicate with each other, since differences in the transfer function changes between the two hearing devices - in particular if occurring in a stepwise manner - may be easily recognizable by the user and can be rather disturbing.
- It can be considered an advantage of the invention, that the complex problem of automatically adapting the transfer function to a current environment is tackled basically by solving two main problems for which solutions are known: classification of a current acoustic environment, and optimally processing sound of predefined (pure) sound classes. Good ways for classifying acoustic environments are known, and good ways for deriving optimum base parameter sets for predefined sound classes are known. An activity parameter set can be obtained as an appropriate mixture of base parameter sets, wherein that mixture depends on the similarity values derived in conjunction with the classification.
- It can be considered another advantage of the invention that it can be made backward-compatible with known hard-switching one-program-at-a-time hearing devices, since it can easily be provided that, instead of using a mixture of base parameter sets, only the parameters of that one base parameter set are used which is assigned to the class with the highest similarity value.
- The transfer function may and usually will comprise two or more sub-functions, which shall undergo changes when the acoustic environment changes. I.e., the transfer function, through which usually many kinds of signal processing can be realized (including filtering, amplifying, compressing and many others), is subdivided into a number of meaningfully combined parts (the sub-functions), and at least some of the sub-functions can be controlled by an associated activity parameter set. Through a sub-function, e.g., beam forming, noise cancelling, feedback cancelling, dynamics processing or filtering, may be realized. An advantage of subdividing the transfer function into a number of sub-functions is, that specifying a sub-function and verifying that a sub-function is working correctly, is simpler than doing so with a very complex transfer function as a whole.
- An activity parameter set may be several (two, three, four or more) parameters (values, numbers), but it may also be just one value or, in particular, one number, which could be considered a strength or an activity setting. Such a one-number strength may, e.g., range from "off" to "fully on" (or from 0 to 1 or from 0 % to 100 %) and indicate the degree to which the corresponding sub-function shall take effect or be in force. E.g., in the case of a beam-former sub-function, the activity setting could range from an omni-directional polar pattern to a maximally focussed directional characteristic typically towards the front (nose) of the hearing device user.
- As will have become clear from the above, the activity parameter sets are obtained in dependence of the current acoustic environment. Accordingly, parameters of activity parameter sets are not predetermined and fixed. The value or values making up an activity parameter set are, during normal operation of the hearing device, frequently, typically quasi-continuously, re-calculated and updated. Therefore, the activity parameter sets are dynamic parameter sets. Accordingly, they can be considered sets of signals, referred to as activity signal sets.
- In one embodiment, for each of said N classes, a class weight factor is derived from the corresponding class similarity factor, and, for each of said M sub-functions, said deriving of said activity parameter set comprises weighting each base parameter set assigned to the respective sub-function with the corresponding class weight factor.
- Said deriving of said class weight factors may comprise, for at least one of said N classes, multiplication with an individual class factor and/or addition of an individual class offset.
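- Written out (one possible linear form, matching the Fig. 1 embodiment described further below): Pn = fn · (pn + on) for n = 1...N, and, for each sub-function gm, the activity parameter set am = P1 · Bm/1 + ... + PN · Bm/N, the weighted sum being formed separately for each parameter of the base parameter sets.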
- In a second aspect of the invention besides the "mixed-mode classification" or "mixed-program mode" aspect, the invention may be seen in using a time-averaged activity parameter set for controlling at least one sub-function. This aspect can be of great value in conjunction with the above-described aspect of the invention ("mixed-mode classification" or "mixed-program mode" aspect), but it may be applied separately therefrom, in conjunction with any hearing device, which allows for gradual changes in the transfer function during normal operation, in particular when such changes in the transfer function are accomplished or requested automatically. Said activity parameter set may be just one parameter of the transfer function or a number of parameters of the transfer function.
- This second aspect of the invention makes it possible to provide for smooth changes in the transfer function, even if rather quick back-and-forth changes occur because of strongly changing acoustic environments.
- In one embodiment, an averaging time for said time-averaging is chosen in dependence of past changes in the activity parameter set. I.e., the averaging time is chosen differently when the activity parameter set has changed a lot in the recent past with respect to when the activity parameter set has hardly changed in the recent past.
- More particularly, the averaging time may be decreased when said past changes in the activity parameter set decrease, and increased when said past changes in the activity parameter set increase. This kind of behavior can strongly decrease annoyingly fast changes in the transfer function when they are inadequate, while allowing for fast changes in the transfer function when they are necessary.
- The advantages of the apparatuses correspond to the advantages of corresponding methods.
- Further preferred embodiments and advantages emerge from the dependent claims and the figures.
- Below, the invention is described in more detail by means of examples and the included drawings. The figures show schematically:
- Fig. 1
- a diagrammatical illustration of a hearing device;
- Fig. 2
- a diagrammatical illustration of a hearing device;
- Fig. 3
- an illustration of an activity parameter and a corresponding time-averaged activity parameter as a function of time;
- Fig. 4
- an exemplary embodiment of an averaging unit.
- The reference symbols used in the figures and their meaning are summarized in the list of reference symbols. Generally, alike or alike-functioning parts are given the same or similar reference symbols. The described embodiments are meant as examples and shall not confine the invention.
- Fig. 1 shows a diagrammatical illustration of a hearing device 1, which comprises an input transducer unit 2, e.g., a microphone or an arrangement of microphones, for transducing sound from the current (actual) acoustic environment into input audio signals S1, wherein audio signals are electrical signals, of analogue and/or digital type, which represent sound. The input audio signals S1 are fed to a signal processing unit 3 for processing according to a transfer function G, which can be adapted to the needs of a user of the hearing device in dependence of said current acoustic environment. The transfer function G is or comprises at least one sub-function. In Fig. 1, the transfer function G is or comprises only one sub-function g1, which is realized in a signal processing circuit 3/1. Said signal processing circuit 3/1 may, e.g., provide for beam forming or for noise suppression or for another part of the transfer function G.
- From the input signals S1, the signal processing circuit 3 derives output audio signals S2, which are fed to an output transducer unit 5, e.g., a loudspeaker. The output transducer unit 5 transduces the output audio signals S2 into signals to be perceived by the user of the hearing device, e.g., into acoustic sound, as indicated in Fig. 1.
- An automatic adaptation of the transfer function G to said current acoustic environment is accomplished in the following manner:
- The input audio signals S1 are fed to a classifier unit 4, in which said current acoustic environment is classified, wherein any known classification method can in principle be used. I.e., the current acoustic environment, represented by the input audio signals S1, is compared to N predetermined acoustic environments, each described by one class of a set of N predefined classes C1...CN.
- A set of N class similarity factors p1...pN is output, wherein each of the class similarity factors p1...pN is indicative of the similarity of said current acoustic environment with the respective predetermined acoustic environment of classes C1...CN or, put in other words, of the likeness (resemblance) of said current acoustic environment and the respective predetermined acoustic environment, or, expressed differently, of the degree of correspondence between said current acoustic environment and the respective predetermined acoustic environment.
- The classification may be accomplished in various ways known in the art. E.g., as indicated in Fig. 1, the input audio signals S1 may be fed to a feature extractor FE, in which a set of (technical, auditory or other) features are extracted from the input audio signals S1. That set of features is analyzed and classified in a classifier C, which also provides for further processing in order to derive said class similarity factors p1...pN.
- Today, N may typically be N = 2, N = 3, N = 4, N = 5 or possibly larger. Typical classes may be "speech", "speech in noise", "noise", "music" or others. Typical features are, e.g., spectral shape, harmonic structure, coherent frequency and/or amplitude modulations, signal-to-noise ratio, spectral center of gravity, spatial distribution of sound sources and many more.
- The automatic adaptation of the transfer function G is on the one hand based on said class similarity factors p1...pN and on the other hand based on base parameter sets. Said base parameter sets are predetermined, and their respective values are usually obtained during a fitting procedure and/or may be at least partly pre-defined in the hearing device 1.
- For each sub-function (in Fig. 1, there is only one sub-function g1 shown), one base parameter set B1/1,...,B1/N is provided per class, B1/1 for class C1, B1/2 for class C2, ... and B1/N for class CN. I.e., for each class C1...CN and each sub-function, there is one base parameter set. Each base parameter set comprises data (typically one number or several numbers), which optimally adjust the respective sub-function to the user's needs and preferences in the respective pre-defined acoustic environment.
- In order to adapt the transfer function G, and in particular each sub-function, to a current acoustic environment, for each sub-function, the base parameter sets are mixed in dependence of their class similarity factors p1...pN. In the embodiment of Fig. 1, this is accomplished by multiplying each base parameter set B1/1,...,B1/N with a respective class weight factor P1...PN and summing up the accordingly weighted base parameter sets B1/1,...,B1/N in a processing unit 8. Said multiplication and summing up of base parameter sets is done separately for each parameter of a base parameter set.
- Said class weight factors P1...PN are derived from said class similarity factors p1...pN. In the example of Fig. 1, the class weight factors P1...PN are obtained by adding to each class similarity factor p1...pN an individual class offset o1...oN and multiplying the result (class-wise) by an individual class factor f1...fN. An optional normalization of the class weight factors P1...PN is not shown in Fig. 1. This enables an adaptation of the mixing and, accordingly, of the whole automatic adaptation behaviour, to preferences of the user.
- The processing unit 8 outputs an activity parameter set a1 (generally: one for each sub-function), which is fed to the transfer function G, or, more precisely, to the respective sub-function. Accordingly, the transfer function G is adapted to the current acoustic environment in a fashion based on the predetermined base parameter sets.
- An example: M = 1, g1: beamformer; N = 2, C1: music, C2: speech in noise. The corresponding base parameter sets B1/1, B1/2 do not have to be derived in a fitting procedure, but can be preprogrammed by the hearing device manufacturer: B1/1 = 0, B1/2 = 1, which means that no beam forming (zero activity of g1) shall be used when the user wants to listen to music, and full beam forming (full activity of g1) shall be used when the user wants to understand a speaker in a noisy place. Zero beam forming activity will usually mean that an omnidirectional polar pattern of the input transducer unit 2 shall be used, and full beam forming activity will typically mean that a high sensitivity towards the front direction (along the user's nose) shall be used, with little sensitivity for sound from other directions.
- When the user is in an acoustic environment with p1 = 99 % and p2 = 1 %, i.e., the classification result implies that the current acoustic environment is practically pure music, the beam former (realized by sub-function g1) is run with (at least approximately) B1/1, i.e., at practically zero activity (o1 = o2 = 0, f1 = f2 = 1 implied).
- When the user is in an acoustic environment with p1 = 1 % and p2 = 99 %, i.e., the classification result implies that the current acoustic environment is practically purely speech-in-noise, the beam former (realized by sub-function g1) is run with (at least approximately) B1/2, i.e., with practically full activity (o1 = o2 = 0, f1 = f2 = 1 implied).
- When, however, the user is in an acoustic environment with p1 = 40 % and p2 = 60 % (e.g., in a restaurant situation with background music), i.e., the classification result implies that the current acoustic environment has aspects of music and somewhat stronger aspects of speech-in-noise, the beam former (realized by sub-function g1) is run with 0.4 × B1/1 + 0.6 × B1/2 , i.e., with moderate activity (o1 = o2 = 0, f1 = f2 = 1 implied). The beam former may provide for a medium emphasis of sound from the front hemisphere and only little suppression of sound from elsewhere.
- Of course, instead of the simple linear behaviour of the mixing of the base parameter sets discussed above by way of example, more sophisticated (non-linear) ways of mixing the base parameter sets may also be applied.
- If it is particularly important to the user to understand speech in noisy surroundings, whereas he is not particularly fond of music, this individual preference may be taken into account by using something like o1 = 0, o2 = 0.3 and/or f1 = 0.8, f2 = 1.5, or the like.
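- As a numeric illustration of this beam-former example (Python, not part of the patent; the linear rule Pn = fn · (pn + on) follows the Fig. 1 description, the function and variable names are chosen freely):
```python
# Numeric sketch of the beam-former example above (illustration only).

def activity(base, p, o, f):
    P = [f[n] * (p[n] + o[n]) for n in range(len(p))]   # class weight factors P1...PN
    return sum(P[n] * base[n] for n in range(len(p)))   # activity parameter a1

base = [0.0, 1.0]   # B1/1 (music: no beam forming), B1/2 (speech in noise: full beam forming)

print(round(activity(base, [0.99, 0.01], [0, 0], [1, 1]), 2))        # 0.01 -> practically omnidirectional
print(round(activity(base, [0.40, 0.60], [0, 0], [1, 1]), 2))        # 0.6  -> moderate beam-former activity
print(round(activity(base, [0.40, 0.60], [0, 0.3], [0.8, 1.5]), 2))  # 1.35 -> speech emphasized (would be
                                                                     # limited to the allowed range in practice)
```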
- Another example: M = 1, g1: gain model (amplification characteristic); N = 2, C1: music, C2: speech. The corresponding base parameter sets B1/1, B1/2 will usually be derived in a fitting procedure and indicate the amplification, in dependence of incoming signal power, that shall be used; characterized, e.g., in terms of decibel values characterizing the incoming signal power and compression values characterizing the steepness of increase of the output signal with increasing incoming signal power. E.g., for music, B1/1 = (50dB, 2.5; 90dB, 0.8; 110dB, 0.3; 0), indicating expansion below 50dB, light compression up to 90dB, strong compression up to 110dB and limiting (infinite compression) thereabove. For speech, other values may be used, e.g., B1/2 = (30dB, 2.5; 80dB, 0.4; 105dB, 0.2; 0), indicating expansion below 30dB, medium compression up to 80dB, strong compression up to 105dB and limiting thereabove. These rather arbitrarily chosen numbers for the base parameter sets shall just indicate one possible way of forming base parameter sets. Usually, gain models are furthermore frequency-dependent, so that the base parameter sets will, in addition, comprise frequency values and, accordingly, even more decibel values and compression values (for the various frequency ranges).
- When the user is in an acoustic environment with p1 = 99 % and p2 = 1 %, i.e., the classification result implies that the current acoustic environment is practically pure music, the gain model (realized by sub-function g1) is run with (at least approximately) B1/1 (o1 = o2 = 0, f1 = f2 = 1 implied).
- When the user is in an acoustic environment with p1 = 1 % and p2 = 99 %, i.e., the classification result implies that the current acoustic environment is practically pure speech, the gain model (g1) is run with (at least approximately) B1/2 (o1 = o2 = 0, f1 = f2 = 1 implied).
- When, however, the user is in an acoustic environment with p1 = 40 % and p2 = 60 % (e.g., in a conversation situation with background music), i.e., the classification result implies that the current acoustic environment has aspects of music and somewhat stronger aspects of speech, the gain model (g1) is run with 0.4 × B1/1 + 0.6 × B1/2 (o1 = o2 = 0, f1 = f2 = 1 implied). I.e., the gain model is a linear combination of the gain model for music and the gain model for speech, obtained in
processing unit 8. The activity parameter set a1 may be identical with this linear combination. Such an activity parameter set a1 is, of course, no longer just a simple strength value or an activity setting. Such an activity parameter set a1 can already be used, without further processing, as the parameters of the corresponding sub-function. - Of course, instead of the simple linear mixing of the base parameter sets discussed above by way of example, more sophisticated (non-linear) ways of mixing the base parameter sets may also be applied.
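- As an illustration only, the following sketch shows a component-wise mixing of the two gain-model base parameter sets from the example above. Treating each base parameter set as a flat list of numbers (kneepoint/compression pairs plus a limiting value) and mixing the components linearly is an assumption made for this sketch.

```python
# B1/1: gain model for music, B1/2: gain model for speech (values from the example above).
B_MUSIC  = [50.0, 2.5, 90.0, 0.8, 110.0, 0.3, 0.0]
B_SPEECH = [30.0, 2.5, 80.0, 0.4, 105.0, 0.2, 0.0]


def mix_parameter_sets(base_sets, weights):
    """Activity parameter set as the component-wise weighted sum of the base sets."""
    return [sum(w * b[i] for w, b in zip(weights, base_sets))
            for i in range(len(base_sets[0]))]


# Conversation with background music: p1 = 40 % music, p2 = 60 % speech.
a1 = mix_parameter_sets([B_MUSIC, B_SPEECH], [0.4, 0.6])
print(a1)  # kneepoints and compression values lie between the two base sets,
           # e.g. the first kneepoint becomes 0.4 * 50 dB + 0.6 * 30 dB = 38 dB
```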
- Said class similarity factors p1, p2 can be obtained, e.g., in the following manner (in classifier unit 4):
- In the feature extractor FE, a number of features is extracted from the input audio signals S1, e.g., rather technical characteristics like the signal power between 200 Hz and 600 Hz relative to the overall signal power and the harmonicity of the signal, or auditory-based characteristics like common build-up and decay processes and coherent amplitude modulations. Each examined feature provides for at least one value in a feature vector. For one specific current acoustic environment (represented by the input audio signals S1), the feature vector might be (3.0; 2.6; 4.1); note that there will typically be between 5 and 10 or even more features and vector components. There is one feature vector for each predetermined acoustic environment, e.g., (5.3; 1.8; 3.6) for class C1 and (1.2; 3.1; 3.9) for class C2. The class similarity factors p1, p2 are a measure for the inverse distance between the feature vector of the current acoustic environment and the feature vector of class C1 and class C2, respectively. I.e., p1, p2 are measures for the closeness of the feature vector of the current acoustic environment to the feature vector of class C1 and class C2, respectively. A measure for said distance can be obtained, e.g., as the Euclidean distance between the vectors, or by means of multivariate variance analysis. For example, the inverse of the square root of the sum of the squares of the differences between the components of the vectors can be used, i.e., pj = 1 / sqrt(Σi (xi − xi,Cj)²), where the xi are the components of the feature vector of the current acoustic environment and the xi,Cj are the components of the feature vector of class Cj; for the numbers above, this yields p1 = 1/√6.18 ≈ 0.40 and p2 = 1/√3.53 ≈ 0.53.
- In this case, the current acoustic environment is more similar to class C2 than to class C1, since p1 < p2.
- Of course, normalization of each feature vector component (corresponding to a specific feature), e.g., to a range from 0 to 1, and/or a normalization during the determination of p1, p2 is advisable, and it is also possible to weight different features differently strongly when determining p1, p2. A suitable normalization makes it possible to generate class similarity factors that lie between 0 and 1 and can therefore be expressed in percent (%); the higher the corresponding class similarity factor (and the closer to 100 %), the greater the likeness of the current acoustic environment to the predetermined acoustic environment. The p1, p2 values in the two simple examples above were assumed to be class similarity factors normalized in such a way.
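- By way of illustration, the following sketch computes class similarity factors as normalized inverse Euclidean distances between feature vectors, using the example vectors given above. The particular normalization (dividing by the sum of the inverse distances so that the factors add up to 100 %) is an assumption made for this sketch.

```python
import math


def inverse_distance(v, w):
    """Inverse of the Euclidean distance between two feature vectors."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, w)))
    return float("inf") if d == 0.0 else 1.0 / d


def class_similarity_factors(current, class_vectors):
    """Normalize the inverse distances so the factors sum to 1 (i.e. 100 %)."""
    raw = [inverse_distance(current, c) for c in class_vectors]
    total = sum(raw)
    return [r / total for r in raw]


current = [3.0, 2.6, 4.1]                  # feature vector of the current acoustic environment
c1, c2 = [5.3, 1.8, 3.6], [1.2, 3.1, 3.9]  # feature vectors of classes C1 and C2

p1, p2 = class_similarity_factors(current, [c1, c2])
print(round(p1, 2), round(p2, 2))  # ~0.43 and ~0.57: closer to C2 than to C1
```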
-
Fig. 2 shows a diagrammatic illustration of a hearing device 1, which is similar to the hearing device 1 of Fig. 1; the underlying principle is basically the same as in Fig. 1. However, the hearing device 1 comprises an averaging unit 9, and at least two sub-functions g1...gM are drawn. In addition, the class similarity factors are processed by a processing circuit 6, which outputs the class weight factors P1...PN. The processing circuit 6 may perform various calculations and, in particular, take care of individual adaptations as provided by f1...fN and o1...oN (see Fig. 1). - The averaging
unit 9 outputs time-averaged activity parameter sets a1*...aM*, which are used for steering the sub-functions g1...gM. The advantages of this will become clear in the following. - The above-described mixing of base parameter sets already provides for a significant improvement over prior art hearing devices, which can only run one of a number of predetermined hearing programs at a time, wherein these hearing programs correspond to base parameter sets that are optimized for a corresponding predefined class. The switching between the predetermined hearing programs in such prior art hearing devices can be annoying to the user, in particular if the similarity values for competing classes are about equal to each other (e.g., about 50 % for each of two classes). In that case, frequent switching between hearing programs may occur. By means of the above-described mixing of base parameter sets, however, (quasi-)continuous adaptations of the transfer function G are possible (without switching), and smooth and agreeable changes will take place in most situations.
- There are, nevertheless, situations in which undesirable, noticeable changes in the transfer function G might still occur despite the base parameter set mixing. E.g., in a car, the classification may change within seconds from nearly 100 % speech (conversation at a red light) to nearly 100 % noise (acceleration) to nearly 100 % music (car radio) to nearly 100 % speech-in-noise (car radio speaker at medium or high speeds). A too fast adaptation of the transfer function may, in such a case, be undesirable.
- A preferable behaviour of the adaptation of the transfer function G shall, as far as possible, fulfill the following points:
- 1. Upon a changing acoustic situation, the hearing device shall change its signal processing sufficiently fast, but as inconspicuously to the user as possible. This should provide for optimum performance during most of the time.
- 2. In a constantly and strongly changing situation, however, the user shall not be annoyed by the sometimes significant changes in signal processing that would be needed for a full adaptation to different acoustic environments.
- These features can be accomplished, at least in part, by means of the following behaviour:
- a. In a constantly and strongly changing situation, the sometimes significant changes in signal processing that would be needed for a full adaptation to different sound classes shall be averaged out, in order to achieve a more constant (more stable) signal processing.
- b. When (after strong changes) an acoustic situation is (again) practically stable (for a certain span of time), the signal processing shall slowly fade towards the appropriate parameter set values (activity parameter sets) for this situation.
- c. Only when the class similarity factors have remained relatively stable for a sufficiently long time (i.e., a rather constant acoustic situation has been detected for a certain span of time) shall the hearing device (again) react quickly to a detected significant change in the acoustic environment.
-
Fig. 3 is a schematic illustration of an activity parameter a1 and a corresponding time-averaged activity parameter a1* as a function of time t, which shall illustrate the above-depicted behaviour; for reasons of simplicity, only one parameter of an activity parameter set (or an activity parameter set comprising only one parameter) is assumed. When fast, great changes happen to a1, a1* will not fully follow a1. Later, when changes in a1 become weaker, a1* slowly drifts towards a1. Finally, after quite a while of approximately constant conditions, a rapid, strong change in a1 will be followed by a1* rather quickly and in full. - Such a behaviour can readily be implemented in the form of software or otherwise. One exemplary implementation is shown in
Fig. 4. The averaging unit 9 receives a1(t) and outputs a1*(t). The averaging time τ, during which a1(t) values are averaged, is controlled in dependence on past a1(t) values. - a1(t) is fed to a
differentiator 91, which outputs a value representative of the derivative of a1(t), i.e., a measure for the changes in a1(t). Therefrom, the absolute value is taken (reference 92), which is then integrated (summed up) in a leaky integrator 93. The leakage factor α determines the time after which the circuit again reacts to a fast change of the input following a series of earlier fast input changes. - Accordingly, a measure for the magnitude of changes during the past time is obtained. The corresponding value can be multiplied by a base time constant t0 for adjustment. The value so obtained is used as the time constant τ for an
averager 90, which averages a1(t) over a time span τ and outputs the so-derived a1*(t). - Using an averager with different attack and release time constants (not shown) allows the averaging unit, when many fast changes occur, to settle towards a predetermined percentage of the dynamic range of those fast changes. Only when the input to the averaging unit settles will the output of the averaging unit slowly follow it.
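- For illustration only, the following sketch implements the averaging unit 9 in discrete time along the lines of Fig. 4: a differentiator, an absolute value, a leaky integrator with leakage factor α, and an averager whose time constant τ is the integrated change measure scaled by the base time constant t0. The use of a first-order exponential smoother as the averager, the sample period and the numeric constants are assumptions made for this sketch.

```python
class AveragingUnit:
    def __init__(self, t0=1.0, alpha=0.95, dt=0.1):
        self.t0 = t0        # base time constant (seconds)
        self.alpha = alpha  # leakage factor of the leaky integrator
        self.dt = dt        # sample period (seconds), an assumption
        self.prev = None    # previous a1 value (for the differentiator)
        self.change = 0.0   # leaky-integrated magnitude of past changes
        self.a1_avg = 0.0   # current output a1*

    def step(self, a1):
        # Differentiator (91) and absolute value (92): magnitude of change.
        delta = 0.0 if self.prev is None else abs(a1 - self.prev)
        self.prev = a1
        # Leaky integrator (93): a measure for the changes during the past time.
        self.change = self.alpha * self.change + delta
        # Averaging time tau grows with past changes (scaled by t0).
        tau = self.t0 * (1.0 + self.change)
        # Averager (90): first-order smoothing towards a1 with time constant tau.
        k = self.dt / (tau + self.dt)
        self.a1_avg += k * (a1 - self.a1_avg)
        return self.a1_avg


# A stable stretch, then rapid alternation, then a stable stretch again: a1*
# follows only slowly during the alternation and drifts towards a1 once the
# classification settles.
unit = AveragingUnit()
signal = [0.0] * 20 + [1.0, 0.0] * 10 + [1.0] * 40
trace = [unit.step(x) for x in signal]
print(round(trace[19], 2), round(trace[39], 2), round(trace[-1], 2))
```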
- Both the averaging in the
averaging unit 9 and the processing in the processing unit 8 may be adjusted individually for different parameters of an activity parameter set and/or for parameter sets of different sub-functions. E.g., for sub-functions that tend to strongly annoy the user when subjected to rapid changes, greater time constants for averaging may be chosen (e.g., via t0), whereas a more rapid following of a1*(t) towards a1(t) may be chosen for sub-functions that cause less irritation when changed. In the case of an averager with different attack and release time constants (not shown), different ratios of attack time constants to release time constants may be chosen for different sub-functions. - As has already been stated above, it is possible to have just one single parameter as a1 for a sub-function. That parameter can be considered the "strength" or the "activity" of the sub-function.
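- As an illustration only, the following sketch shows an averager with different attack and release time constants in its simplest first-order form; the specific time constants, the sample period and the function name are assumptions made for this sketch.

```python
def attack_release_step(prev, x, attack_tau, release_tau, dt=0.1):
    """One smoothing step: a short time constant while the input rises
    (attack), a longer one while it falls (release)."""
    tau = attack_tau if x > prev else release_tau
    k = dt / (tau + dt)
    return prev + k * (x - prev)


# A sub-function that is sensitive to rapid changes may be given a larger
# release-to-attack ratio than a less sensitive one.
y = 0.0
for x in [0.0, 1.0, 1.0, 0.2, 0.2, 0.2]:
    y = attack_release_step(y, x, attack_tau=0.5, release_tau=5.0)
print(round(y, 2))
```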
- It is to be noted that a time-averaging like the one described above may not only be used for activity parameters (or, more particularly, for each value or number of an activity parameter set), but may also be used, in general, for smoothing any other adjustments of a transfer function G. It is applicable to any (dynamically and/or continuously) adjustable processing algorithm.
- It is furthermore to be noted that the various units and parts in the Figures are merely logical units. They may be implemented in various ways, e.g., all in one processor chip or distributed over a number of processors, in one or several pieces of software, and so on.
-
- 1
- hearing device
- 2
- input transducer unit, microphone unit, microphone
- 3
- signal processing unit, transmission unit
- 3/1...3/M
- signal processing circuits
- 4
- classifier unit
- 5
- output transducer unit, loudspeaker
- 6
- processing circuit
- 7
- base parameter storage unit
- 8
- processing unit
- 9
- averaging unit
- 90
- averager
- 91
- differentiator
- 92
- calculating the absolute value
- 93
- integrator
- a1...aM
- activity parameter set
- a1*...aM*
- time-averaged activity parameter set
- B1/1...BM/N
- base parameter sets
- C
- classifier
- C1...CN
- classes
- FE
- feature extractor
- f1...fN
- individual class factor
- G
- transfer function
- g1...gM
- sub-function
- M
- number of sub-functions
- N
- number of classes
- o1...oN
- individual class offset
- p1...pN
- class similarity factor
- P1...PN
- class weight factor
- S1
- input audio signals
- S2
- output audio signals
- t
- time
- t0
- base time constant
- α
- leakage factor
- τ
- time constant for averaging, averaging time
Claims (14)
- Method for operating a hearing device (1) having an adjustable transfer function (G) comprising M sub-functions (g1...gM), wherein M is an integer with M ≥ 1, and wherein said transfer function (G) describes how input audio signals (S1) generated by an input transducer unit (2) of said hearing device (1) relate to output audio signals (S2) to be fed to an output transducer unit (5) of said hearing device (1), said method comprising the steps of- deriving said input audio signals (S1) from a current acoustic environment; and for each of said M sub-functions (g1,...,gM):- deriving, on the basis of said input audio signals (S1) and for each class of N classes (C1,...,CN) each of which describes a predetermined acoustic environment, a class similarity factor (p1;...;pN) indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N ≥ 2;- deriving from N predetermined base parameter sets (B1/1,...,B1/N;...;BM/1,...,BM/N) assigned to the respective sub-function (g1;...;gM) and in dependence of said class similarity factors (p1,...,pN) an activity parameter set (a1;...;aM) for the respective sub-function (g1;...;gM), wherein each of said N base parameter sets (B1/1,...,B1/N;...;BM/1,...,BM/N) assigned to the respective sub-function (g1;...;gM) is assigned to a different class (C1;...;CN) of said N classes (C1,...,CN), wherein the activity parameter set is obtained as a mixture of base parameter sets, the mixture depending on the class similarity factors of the base parameter sets;- adjusting the respective sub-function (g1;...;gM) by means of said activity parameter set (a1;...;aM); wherein, for at least one of said M sub-functions (g1,...,gM), a time-averaged activity parameter set (a1*;...;aM*) is used for adjusting the respective at least one of said M sub-functions (g1;...;gM);
the method further comprising the steps of- choosing an averaging time (τ) for said time-averaging in dependence of past changes in the respective activity parameter set (a1;...;aM);- decreasing said averaging time (τ) when said past changes in the respective activity parameter set (a1;...;aM) decrease; and- increasing said averaging time (τ) when said past changes in the respective activity parameter set (a1;...;aM) increase. - Method according to claim 1, with M ≥ 2.
- Method according to claim 1 or claim 2, wherein the base parameter sets (B1/1,...,BM/N) are chosen such that using each of the M base parameter sets (B1/1,...,BM/1;...; B1/N,...,BM/N) assigned to one specific class of said N classes (C1,...,CN) for adjusting the sub-function (g1;...;gM) to which the respective base parameter set (B1/1;...;BM/N) is assigned provides for optimized output audio signals (S2), when said current acoustic environment is identical with the predetermined acoustic environment described by that specific class.
- Method according to one of claims 1 to 3, wherein each of said activity parameter sets (a1;...;aM) comprises a multitude of values, in particular a multitude of numbers.
- Method according to one of claims 1 to 3, wherein each of said activity parameter sets (a1;...;aM) is a single value, in particular, a single number.
- Method according to one of the preceding claims, comprising the step of- deriving, for each of said N classes (C1,...,CN), a class weight factor (P1;...;PN) from the corresponding class similarity factor (p1; ...; pN) ;wherein, for each of said M sub-functions (g1,...,gM), said deriving of said activity parameter set (a1;...;aM) comprises weighting each base parameter set (B1/1,...,B1/N;...; BM/1,...,BM/N) assigned to the respective sub-function (g1;...;gM) with the corresponding class weight factor (P1;...;PN).
- Method according to claim 6, wherein, for at least one of said N classes (C1,...,CN), said deriving of said class weight factor (P1;...;PN) comprises multiplication with an individual class factor (f1; ...; fN) and/or addition of an individual class offset (o1;...;oN).
- Method according to one of the preceding claims, wherein at least one of the group comprising beam forming, noise cancelling, feedback cancelling, dynamics processing, filtering is realized by means of at least one of said M sub-functions (g1,...,gM) .
- Hearing device (1) comprising- an input transducer unit (2) for deriving input audio signals (S1) from a current acoustic environment;- an output transducer unit (5) for receiving output audio signals (S2);- a signal processing unit (3) for deriving said output audio signals (S2) from said input audio signals (S1) by processing said input audio signals (S1) according to an adjustable transfer function (G), which adjustable transfer function (G) describes how said input audio signals (S1) relate to said output audio signals (S2) and comprises M sub-functions (g1,...,gM),wherein M is an integer with M ≥ 1;- a classifier unit (4) for deriving, on the basis of said input audio signals (S1) and for each class of N classes (C1,...,CN) each of which describes a predetermined acoustic environment, a class similarity factor (p1;...;pN) indicative of the similarity of said current acoustic environment with the predetermined acoustic environment described by the respective class, wherein N is an integer with N ≥ 2;- a base parameter storage unit (7) storing, for each of said M sub-functions (g1,...,gM), N predetermined base parameter sets (B1/1,...,B1/N;...;BM/1,...,BM/N) each assigned to a different class (C1;...;CN) of said N classes (C1,...,CN) ;
a processing unit (8) operationally connected to said base parameter storage unit (7) and adapted to deriving an activity parameter set (a1;...;aM) for each of said M sub-functions (g1,...,gM), wherein each of said activity parameter sets (a1;...;aM) is derived in dependence of said class similarity factors (p1,...,pN) from the base parameter sets (B1/1,...,B1/N;...; BM/1,...,BM/N) assigned to the respective sub-function (g1;...;gM), wherein each activity parameter set is obtained as a mixture of base parameter sets, the mixture depending on the class similarity factors of the base parameter sets; wherein each of said M sub-functions (g1,...,gM) is adjusted by means of the respective activity parameter set (a1;...;aM), and wherein said processing unit (8) comprises an averaging unit (9) for deriving, for each of at least one of said M sub-functions (g1,...,gM), a time-averaged activity parameter set (a1*;...;aM*), wherein an averaging time (τ) for said time-averaging is chosen in dependence of past changes in the respective activity parameter set (a1;...;aM), wherein said averaging time (τ) is decreased when said past changes in the respective activity parameter set (a1;...;aM) decrease, and wherein said averaging time (τ) is increased when said past changes in the respective activity parameter set (a1;...;aM) increase, and wherein said at least one of said M sub-functions (g1,...,gM) is adjusted by means of the respective time-averaged activity parameter set (a1*;...;aM*). - Device (1) according to claim 9, with M ≥ 2.
- Device (1) according to claim 9 or claim 10, wherein, for each of said N classes (C1,...,CN), the M base parameter sets (B1/1,...,BM/N) assigned to one specific class of said N classes (C1,...,CN) are chosen such that optimized output audio signals (S2) are generated when said M base parameter sets (B1/1,...,BM/1;...; B1/N,...,BM/N) are each used for adjusting that sub-function (g1;...;gM) to which the respective base parameter set (B1/1;...;BM/N) is assigned and when said current acoustic environment is identical with the predetermined acoustic environment described by said specific class.
- Device (1) according to one of claims 9 to 11, wherein each of said activity parameter sets (a1;...;aM) comprises a multitude of values, in particular a multitude of numbers.
- Device (1) according to one of claims 9 to 11, wherein each of said activity parameter sets (a1;...;aM) is a single value, in particular, a single number.
- Hearing system comprising a hearing device (1) according to one of claims 9 to 13.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002650600A CA2650600A1 (en) | 2006-05-16 | 2007-03-12 | Hearing device and method for operating a hearing device |
AU2007251717A AU2007251717B2 (en) | 2006-05-16 | 2007-03-12 | Hearing device and method for operating a hearing device |
PCT/EP2007/052281 WO2007131815A1 (en) | 2006-05-16 | 2007-03-12 | Hearing device and method for operating a hearing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06114038 | 2006-05-16 |
Publications (3)
Publication Number | Publication Date |
---|---|
EP1858292A1 EP1858292A1 (en) | 2007-11-21 |
EP1858292B1 EP1858292B1 (en) | 2014-06-18 |
EP1858292B2 true EP1858292B2 (en) | 2022-02-23 |
Family
ID=51059206
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06120253.7A Active EP1858292B2 (en) | 2006-05-16 | 2006-09-07 | Hearing device and method of operating a hearing device |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP1858292B2 (en) |
DK (1) | DK1858292T4 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP4415390A1 (en) * | 2023-02-13 | 2024-08-14 | Sonova AG | Operating a hearing device for classifying an audio signal to account for user safety |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK2304972T3 (en) | 2008-05-30 | 2015-08-17 | Sonova Ag | Method for adapting sound in a hearing aid device by frequency modification |
DE102012206299B4 (en) * | 2012-04-17 | 2017-11-02 | Sivantos Pte. Ltd. | Method for operating a hearing device and hearing device |
EP3120578B2 (en) | 2014-03-19 | 2022-08-17 | Bose Corporation | Crowd sourced recommendations for hearing assistance devices |
US9497530B1 (en) * | 2015-08-31 | 2016-11-15 | Nura Holdings Pty Ltd | Personalization of auditory stimulus |
WO2018196973A1 (en) * | 2017-04-27 | 2018-11-01 | Sonova Ag | User adjustable weighting of sound classes of a hearing aid |
EP3843427B1 (en) * | 2019-12-23 | 2022-08-03 | Sonova AG | Self-fitting of hearing device with user support |
DE102021132434A1 (en) * | 2021-12-09 | 2023-06-15 | Elevear GmbH | Device for active noise and/or occlusion suppression, corresponding method and computer program |
EP4507327A1 (en) | 2023-08-09 | 2025-02-12 | Sonova AG | Operating a hearing device for classifying an audio signal |
EP4521777A1 (en) | 2023-09-07 | 2025-03-12 | Sonova AG | Operating a hearing device for optimizing sound delivery from a localized media source |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1404152A2 (en) † | 2002-09-30 | 2004-03-31 | Siemens Audiologische Technik GmbH | Device and method for fitting a hearing-aid |
EP1841286A2 (en) † | 2006-03-31 | 2007-10-03 | Siemens Audiologische Technik GmbH | Hearing aid with adaptive starting values of parameters |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DK0788290T3 (en) | 1996-02-01 | 2005-02-14 | Siemens Audiologische Technik | Programmable hearing aid |
EP0964603A1 (en) | 1998-06-10 | 1999-12-15 | Oticon A/S | Method of sound signal processing and device for implementing the method |
JP2004500750A (en) | 2001-01-05 | 2004-01-08 | フォーナック アーゲー | Hearing aid adjustment method and hearing aid to which this method is applied |
AU2001221399A1 (en) | 2001-01-05 | 2001-04-24 | Phonak Ag | Method for determining a current acoustic environment, use of said method and a hearing-aid |
DE50211346D1 (en) | 2001-10-17 | 2008-01-24 | Siemens Audiologische Technik | Method for operating a hearing aid and hearing aid |
AU2472202A (en) | 2002-01-28 | 2002-04-29 | Phonak Ag | Method for determining an acoustic environment situation, application of the method and hearing aid |
-
2006
- 2006-09-07 DK DK06120253.7T patent/DK1858292T4/en active
- 2006-09-07 EP EP06120253.7A patent/EP1858292B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1404152A2 (en) † | 2002-09-30 | 2004-03-31 | Siemens Audiologische Technik GmbH | Device and method for fitting a hearing-aid |
EP1841286A2 (en) † | 2006-03-31 | 2007-10-03 | Siemens Audiologische Technik GmbH | Hearing aid with adaptive starting values of parameters |
Non-Patent Citations (1)
Title |
---|
NORDQVIST P. AND LEIJON A.: "Hearing-aid automatic gain control adapting to two sound sources in the environment, using three time constants", J. ACOUST. SOC. AM., vol. 116, no. 5, November 2004 (2004-11-01) † |
Also Published As
Publication number | Publication date |
---|---|
DK1858292T3 (en) | 2014-07-07 |
DK1858292T4 (en) | 2022-04-11 |
EP1858292A1 (en) | 2007-11-21 |
EP1858292B1 (en) | 2014-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7957548B2 (en) | Hearing device with transfer function adjusted according to predetermined acoustic environments | |
EP1858292B2 (en) | Hearing device and method of operating a hearing device | |
EP1658754B1 (en) | A binaural hearing aid system with coordinated sound processing | |
US10575104B2 (en) | Binaural hearing device system with a binaural impulse environment detector | |
US6910013B2 (en) | Method for identifying a momentary acoustic scene, application of said method, and a hearing device | |
US8249284B2 (en) | Hearing system and method for deriving information on an acoustic scene | |
US7181033B2 (en) | Method for the operation of a hearing aid as well as a hearing aid | |
US8290190B2 (en) | Method for sound processing in a hearing aid and a hearing aid | |
EP2152161B1 (en) | Fitting procedure for hearing devices and corresponding hearing device | |
US7995781B2 (en) | Method for operating a hearing device as well as a hearing device | |
JP2005537702A (en) | Hearing aids and methods for enhancing speech clarity | |
CN107454537B (en) | Hearing device comprising a filter bank and an onset detector | |
US8224002B2 (en) | Method for the semi-automatic adjustment of a hearing device, and a corresponding hearing device | |
CN113473341A (en) | Hearing aid device comprising an active vent configured for audio classification and method for operating the same | |
EP1858291B1 (en) | Hearing system and method for deriving information on an acoustic scene | |
CN113299316A (en) | Estimating the direct reverberation ratio of a sound signal | |
US20040258249A1 (en) | Method for operating a hearing aid device and hearing aid device with a microphone system in which different directional characteristics can be set | |
AU2007251717B2 (en) | Hearing device and method for operating a hearing device | |
US20130188811A1 (en) | Method of controlling sounds generated in a hearing aid and a hearing aid | |
EP2107826A1 (en) | A directional hearing aid system | |
JP2019198073A (en) | Method for operating hearing aid, and hearing aid | |
WO2023169755A1 (en) | Method for operating a hearing aid | |
CN118355676A (en) | Method for operating a hearing device and hearing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR MK YU |
|
17P | Request for examination filed |
Effective date: 20080222 |
|
17Q | First examination report despatched |
Effective date: 20080326 |
|
AKX | Designation fees paid |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
INTG | Intention to grant announced |
Effective date: 20140107 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DK Ref legal event code: T3 Effective date: 20140703 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 673946 Country of ref document: AT Kind code of ref document: T Effective date: 20140715 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602006041931 Country of ref document: DE Effective date: 20140731 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140919 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: VDEP Effective date: 20140618 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 673946 Country of ref document: AT Kind code of ref document: T Effective date: 20140618 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20141020 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20141018 Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R026 Ref document number: 602006041931 Country of ref document: DE |
|
PLBI | Opposition filed |
Free format text: ORIGINAL CODE: 0009260 |
|
26 | Opposition filed |
Opponent name: SIEMENS MEDICAL INSTRUMENTS PTE. LTD. Effective date: 20150318 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: LU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140907 Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PLAX | Notice of opposition and request to file observation + time limit sent |
Free format text: ORIGINAL CODE: EPIDOSNOBS2 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R026 Ref document number: 602006041931 Country of ref document: DE Effective date: 20150318 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140930 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140930 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20140907 |
|
RAP2 | Party data changed (patent owner data changed or rights of a patent transferred) |
Owner name: SONOVA AG |
|
PLBB | Reply of patent proprietor to notice(s) of opposition received |
Free format text: ORIGINAL CODE: EPIDOSNOBS3 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20060907 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20140618 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 11 |
|
RDAF | Communication despatched that patent is revoked |
Free format text: ORIGINAL CODE: EPIDOSNREV1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
APBM | Appeal reference recorded |
Free format text: ORIGINAL CODE: EPIDOSNREFNO |
|
APBP | Date of receipt of notice of appeal recorded |
Free format text: ORIGINAL CODE: EPIDOSNNOA2O |
|
APAH | Appeal reference modified |
Free format text: ORIGINAL CODE: EPIDOSCREFNO |
|
APBQ | Date of receipt of statement of grounds of appeal recorded |
Free format text: ORIGINAL CODE: EPIDOSNNOA3O |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 12 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 13 |
|
APBU | Appeal procedure closed |
Free format text: ORIGINAL CODE: EPIDOSNNOA9O |
|
PUAH | Patent maintained in amended form |
Free format text: ORIGINAL CODE: 0009272 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: PATENT MAINTAINED AS AMENDED |
|
27A | Patent maintained in amended form |
Effective date: 20220223 |
|
AK | Designated contracting states |
Kind code of ref document: B2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R102 Ref document number: 602006041931 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: DK Ref legal event code: T4 Effective date: 20220408 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20220927 Year of fee payment: 17 Ref country code: DK Payment date: 20220928 Year of fee payment: 17 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20220926 Year of fee payment: 17 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230530 |
|
REG | Reference to a national code |
Ref country code: DK Ref legal event code: EBP Effective date: 20230930 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20230907 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230907 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230907 Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240927 Year of fee payment: 19 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20230930 |