EP1006511A1 - Sound processing method and device for adapting a hearing aid for hearing impaired - Google Patents
Sound processing method and device for adapting a hearing aid for hearing impaired
- Publication number
- EP1006511A1 (application EP99403027A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- parameters
- modified
- synthesis
- speech signal
- hearing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 208000032041 Hearing impaired Diseases 0.000 title claims abstract description 22
- 238000003672 processing method Methods 0.000 title 1
- 238000000034 method Methods 0.000 claims abstract description 36
- 238000001228 spectrum Methods 0.000 claims abstract description 12
- 230000015572 biosynthetic process Effects 0.000 claims description 27
- 238000003786 synthesis reaction Methods 0.000 claims description 27
- 238000004458 analytical method Methods 0.000 claims description 20
- 238000012545 processing Methods 0.000 claims description 10
- 230000003595 spectral effect Effects 0.000 claims description 9
- 230000006835 compression Effects 0.000 claims description 8
- 238000007906 compression Methods 0.000 claims description 8
- 230000006870 function Effects 0.000 claims description 8
- 238000012937 correction Methods 0.000 claims description 6
- 238000009877 rendering Methods 0.000 claims 1
- 206010011878 Deafness Diseases 0.000 description 21
- 230000009466 transformation Effects 0.000 description 12
- 231100000895 deafness Toxicity 0.000 description 10
- 208000016354 hearing loss disease Diseases 0.000 description 10
- 230000008569 process Effects 0.000 description 8
- 230000008447 perception Effects 0.000 description 7
- 238000011282 treatment Methods 0.000 description 7
- 230000008901 benefit Effects 0.000 description 6
- 238000001914 filtration Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 4
- 238000012986 modification Methods 0.000 description 4
- 238000012546 transfer Methods 0.000 description 4
- 230000003321 amplification Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 230000003203 everyday effect Effects 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 238000000844 transformation Methods 0.000 description 3
- 230000017105 transposition Effects 0.000 description 3
- 230000001755 vocal effect Effects 0.000 description 3
- 238000005520 cutting process Methods 0.000 description 2
- 210000005069 ears Anatomy 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 230000000737 periodic effect Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 230000005236 sound signal Effects 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 210000000860 cochlear nerve Anatomy 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000007943 implant Substances 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 230000011514 reflex Effects 0.000 description 1
- 230000033764 rhythmic process Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000000638 stimulation Effects 0.000 description 1
- 230000004083 survival effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 230000002194 synthesizing effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/35—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception using translation techniques
- H04R25/356—Amplitude, e.g. amplitude shift or compression
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
- G10L21/0364—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for improving intelligibility
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/06—Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
- G10L2021/065—Aids for the handicapped in understanding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
Definitions
- the present invention relates to a method and a device for sound correction for the hearing impaired. It applies equally to the production of hearing aids, to software executable on personal computers or telephone answering machines and, more generally, to all devices intended to improve the listening comfort and speech comprehension of people with deafness.
- the twentieth century saw a continuous effort in the design of machines to relieve and assist the hearing impaired.
- a first class is aimed at "mild" deafness and seeks to correct hearing and make it as close to normal as possible. This is what the usual prostheses widely available on the market do.
- a second class is intended for more severe deafness and aims to transform speech into a synthetic speech accessible to the hearing impaired person.
- most achievements in this category are aimed at the profoundly deaf.
- a remarkable example is that of the cochlear implant, which acts by means of electrodes through direct stimulation of the auditory nerve.
- the present invention aims to propose a solution for people with so-called "intermediate" deafness. These people currently have no suitable technical assistance: they are too affected to be served by standard prostheses, but their residual hearing is sufficient for them to do without devices intended for the profoundly deaf.
- the usual prostheses generally use a method of selective amplification of speech as a function of frequency.
- an automatic regulation of the sound level acts on the gain of amplification, the aim being to give the best listening comfort and protection against instantaneous power peaks.
- these prostheses are miniaturized to be worn behind the ear or as an in-ear insert, which leads to relatively poor performance capable only of very coarse hearing corrections. Typically only three frequency bands are defined for frequency correction. These prostheses are unambiguously intended for the most frequent "mild" deafness. More severe deafness can be relieved, but at the cost of painful inconvenience caused in particular by the amplification of background noise and by acoustic feedback (the Larsen effect). Moreover, there is no possibility of correction in the frequency zones for which no hearing remains.
- the idea behind the invention is to overcome the above drawbacks by using a parametric model of the speech signal capable of carrying out transformations relevant to hearing correction for the hearing impaired, by implementing a method capable of satisfying the three constraints mentioned above.
- the subject of the invention is a method for hearing correction for the hearing impaired, characterized in that it consists in extracting the parameters characterizing the pitch, the voicing, the energy and the spectrum of the speech signal, in modifying these parameters to make speech intelligible to the hearing impaired person, and in reconstructing from the modified parameters a speech signal perceptible to the hearing impaired person.
- the invention also relates to a device for implementing the aforementioned method.
- the method and the device according to the invention have the advantage of employing the parametric models commonly used in vocoders and of adapting them to the hearing of the hearing impaired. This makes it possible to work no longer at the level of the sound signal, as previous techniques do, but at the level of the symbolic structure of the speech signal, so as to preserve its intelligibility.
- the vocoders indeed have the advantage of using an alphabet that incorporates the notions of "pitch", "spectrum", "voicing" and "energy", which are very close to the physiological model of the mouth and the ear.
- by virtue of Shannon's theory, the information transmitted then does carry the intelligibility of speech.
- the materialization of speech intelligibility in a computer-processable form thus opens up a new perspective. Intelligibility can thus be captured during the analysis operation, and it is restored during the synthesis.
- the synthesis operation of a parametric vocoder can therefore be adapted to the hearing characteristics of the hearing impaired.
- this technique, associated with more conventional processes, makes it possible to envisage a particularly general prosthetic method that can serve a very large population, including people suffering from intermediate deafness.
- the method and the device according to the invention offer great freedom in the settings, since each parameter can be modified independently of the others without reciprocal impact, with a specific setting for each ear.
- Figure 2, a parametric model of speech signal production.
- Figure 4, a curve for transforming, during synthesis of the speech signal, the energy of the speech signal measured during the speech signal analysis process.
- Figure 5, an embodiment of a device for implementing the method according to the invention.
- the method for processing the speech signal according to the invention is based on a parametric model of the speech signal of the type commonly used in the production of HSX digital vocoders, a description of which can be found in the article by P. Gournay and F. Chartier entitled "A 1200 bit/s HSX speech coder for very low bit rate communications", published in the Proceedings of the IEEE Workshop on Signal Processing Systems (SiPS'98), Boston, 8-10 October 1998.
- the spectral envelope of the signal can be obtained by autoregressive modeling using a linear prediction filter or by a short-term Fourier analysis synchronous with the pitch. These four parameters are estimated periodically on the speech signal, from one to several times per frame depending on the parameter, for a frame duration typically between 10 and 30 ms.
- the speech signal is restored in the manner represented in Figure 2, by exciting, with a pitch-driven pulse train or with a stochastic noise depending on whether the sound is respectively voiced or unvoiced, a digital synthesis filter 1 which models the vocal tract by its transfer function.
- a switch 2 transmits the pitch or noise to the input of the synthesis filter 1.
- a variable gain amplifier 3 depending on the energy of the speech signal is placed at the output of the synthesis filter 1.
- in the case of a simple parametric model with a binary voiced/unvoiced decision, the synthesis procedure can be summarized as that shown in Figure 2.
- the method according to the invention, which is represented in Figure 3 in the form of a flowchart, is more complex and takes place in four stages: a preprocessing step 4, a step 5 of analysis of the signal obtained in step 4 for the extraction of the parameters characterizing the pitch, the voicing, the energy and the spectrum of the speech signal, a step 6 during which the parameters obtained in step 5 are modified, and a step 7 of synthesis of a speech signal composed from the modified parameters of step 6.
- step 4 is the one conventionally implemented in vocoders. It consists in particular, after having converted the speech signal into a digital signal, in reducing the background noise using for example the method described by Y. Ephraim and D. Malah, entitled "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator", published in IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, no. 6, pp. 1109-1121, 1984, in cancelling the acoustic echoes using for example the method described in the article by K. Murano, S. Unagami and F. Amano entitled "Echo cancellation and applications", published in IEEE Communications Magazine, 28(1), pp. 49-55, January 1990, in performing automatic gain control, or even in pre-emphasizing the signal.
- the parametric processing of the speech signal obtained at the end of step 4 is carried out in step 5. It consists in cutting the speech signal into segments of constant duration Tanalyse (typically 5 to 30 milliseconds) and in estimating, on each of them, the parameters of the speech signal model.
- the pitch and the voicing are estimated every 22.5 milliseconds.
- the voicing information is given in the form of a transition frequency between a voiced low band and an unvoiced high band.
- the signal energy is estimated every 5.625 milliseconds. During unvoiced periods of the signal, it is estimated over a duration of 45 samples (5.625 ms) and expressed in dB per sample.
- the synthesis procedure consists, for each interval of duration Tanalyse, in exciting the synthesis filter given by S(ω) with the frequency-weighted sum (low band / high band defined by the voicing frequency) of a pseudo-random white noise for the high band and of a periodic Dirac comb signal of fundamental frequency equal to the pitch for the low band.
- transformations can be applied to the parameters from the analysis in step 5.
- Each parameter can indeed be modified independently of the others, without reciprocal interaction.
- these transformations can be constant or be activated only under particular conditions (for example, triggering of the modification of the spectral envelope for certain configurations of the energy distribution as a function of frequency, etc.).
- steps 6₁ to 6₄ relate essentially to the value of the pitch, which characterizes the fundamental frequency, to the voicing, to the energy and to the spectral envelope.
- in step 6₁, any transformation defining a new pitch value from the value of the analysis pitch obtained in step 5 is applicable.
- the factor FacteurPitch is adjustable for the type of deafness considered.
- as for the pitch, the voicing frequency can be modified by any transformation defining a synthesis voicing frequency for each value of the voicing frequency analyzed in step 5.
- the factor FacteurVoisement is adjustable for the type of deafness considered.
- the energy is processed in step 6₃. As before, any transformation defining a synthesis energy from the energy of the speech signal analyzed in step 5 is applicable.
- the method according to the invention applies to the energy a compression function with four linear segments, as shown in the graph of Figure 4.
- the processing of the spectral envelope takes place in step 6₄. In this step any transformation defining a spectrum S'(ω) from the spectrum S(ω) analyzed in step 5 is applicable.
- the frequency scale is compressed by a factor FacteurSpectre so that the useful bands before and after processing are respectively equal to [0..FECH/2] and [0..FECH/(2·FacteurSpectre)], where FECH is the sampling frequency of the system.
- the speech restored in step 7 can further be accelerated or slowed down by simply modifying the duration of the time interval taken into account for the synthesis phase.
- if FacteurTemps > 1, speech is slowed down; if FacteurTemps < 1, speech is accelerated.
- a number of post-processing operations can be envisaged, consisting for example in carrying out a band-pass filtering and a linear equalization of the synthesized signal, or else a multiplexing of the sound over the two ears.
- the objective of the linear equalization operation is to compensate for the patient's audiogram by amplifying or attenuating certain frequency bands.
- the gain at 7 frequencies (0, 125, 250, 500, 1000, 2000 and 4000 Hz) can be adjusted over time between -80 and +10 dB according to the patient's needs or the specifics of his audiogram.
- this operation can be carried out for example by filtering via a fast Fourier transform (FFT), as described for example in the book by D. Elliott entitled "Handbook of Digital Signal Processing", published in 1987 by Academic Press.
- the multiplexing operation allows monophonic reproduction (for example a processed signal alone) or stereophonic reproduction (for example a processed signal on one channel and an unprocessed signal on the other). This reproduction allows the hearing impaired person to adapt the processing to each of his ears (two linear equalizers compensating for two different audiograms, for example), and possibly to keep intact on one ear a form of the signal to which he is accustomed and on which he can rely, for example, to synchronize.
- the device for implementing the method according to the invention, which is represented in Figure 5, comprises a first channel composed of an analysis device 8, a synthesis device 9 and a first equalizer 10, and a second channel comprising a second equalizer 11, the two channels being coupled between a sound pickup device 13 and a pair of earphones 12a, 12b.
- the analysis device 8 and the synthesis device 9 can be implemented by borrowing the known techniques for producing vocoders, and in particular that of the aforementioned HSX vocoders.
- the outputs of the equalizers of the two channels are multiplexed by a multiplexer 14 to allow monophonic or stereophonic sound reproduction.
- a processing device 15, formed by a microprocessor or any equivalent device, is coupled to the synthesis device 9 to carry out the modification of the parameters supplied by the analysis device 8.
- a preprocessing device 16, interposed between the sound pickup device 13 and each of the two channels, performs denoising and conversion of the speech signal into digital samples.
- the denoised digital samples are applied respectively to the input of the equalizer 11 and to the input of the analysis device 8.
- the processing device 15 can be integrated into the synthesis device 9, just as it is also possible to integrate all the analysis and synthesis processing in the same software executable on a personal computer or a telephone answering machine, for example.
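Purely as an illustrative sketch of this two-channel arrangement, the Python fragment below routes a denoised signal through a processed channel and an unprocessed channel; the analysis/modification/synthesis chain and the equalizers are stubbed out with hypothetical callables and are not code from the patent.

```python
import numpy as np

def hearing_device(samples, process_chain, equalize_left, equalize_right, stereo=True):
    """Route a denoised signal through the two channels of the Figure 5 device.

    Channel 1: analysis 8 -> parameter modification 15 -> synthesis 9 -> equalizer 10.
    Channel 2: equalizer 11 applied to the unprocessed signal.
    """
    left = equalize_left(process_chain(samples))    # processed channel
    right = equalize_right(samples)                 # unprocessed (but equalized) channel
    if stereo:
        return np.stack([left, right], axis=0)      # one channel per ear (12a, 12b)
    return left                                     # monophonic playback of the processed signal

# example with trivial stand-ins for the real blocks
x = np.random.randn(8000)
out = hearing_device(x, process_chain=lambda s: 0.5 * s,
                     equalize_left=lambda s: s, equalize_right=lambda s: s)
```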
Landscapes
- Engineering & Computer Science (AREA)
- Acoustics & Sound (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Neurosurgery (AREA)
- Multimedia (AREA)
- Otolaryngology (AREA)
- Circuit For Audible Band Transducer (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Stereophonic System (AREA)
- Percussion Or Vibration Massage (AREA)
- Auxiliary Devices For Music (AREA)
Abstract
The method consists in extracting (5) the parameters characterizing the pitch, the voicing, the energy and the spectrum of the speech signal, in modifying (6) the parameters to make speech intelligible to a hearing impaired person, and in reconstructing (7) a speech signal perceptible to the hearing impaired person from the modified parameters. Application: hearing aids.
Description
The present invention relates to a method and a device for sound correction for the hearing impaired. It applies equally to the production of hearing aids, to software executable on personal computers or telephone answering machines and, more generally, to all devices intended to improve the listening comfort and speech comprehension of people with deafness.
The problem posed by the hearing impaired stems essentially from the specific and degraded nature of their auditory perception.
In his need to communicate, man has since the dawn of time built an oral mode of communication, speech, which is based on the average characteristics of production (the voice) and perception (the ear) of the sound signal. Everyday language is therefore that of the greatest number. Conversely, the hearing of the hearing impaired person is very far from the average, and everyday language is hardly, if at all, accessible to him.
Understanding everyday language is an obligatory step for the integration of the hearing impaired person into his community. In what can be considered a reflex of social survival, every hearing impaired person is naturally led to build a language of his own and to implement processes, techniques and a communication strategy enabling him to transpose the common language into his particular language. A well-known and spectacular example is that of lip reading, which provides access to normal speech through a visual alphabet of lip positions.
The twentieth century saw a continuous effort in the design of machines intended to relieve and assist the hearing impaired.
Two classes of machines have been developed.
A first class is aimed at "mild" deafness and seeks to correct hearing and make it as close to normal as possible. This is what the usual prostheses widely available on the market do.
A second class is intended for more severe deafness and aims to transform speech into a synthetic speech accessible to the hearing impaired person. In this category most achievements are aimed at the profoundly deaf. A remarkable example is that of the cochlear implant, which acts by means of electrodes through direct stimulation of the auditory nerve.
The present invention aims to propose a solution for people with so-called "intermediate" deafness. These people currently have no suitable technical assistance: they are too affected to be served by the usual prostheses, but their residual hearing is sufficient for them to do without devices intended for the profoundly deaf.
The usual prostheses generally implement a method of selective amplification of speech as a function of frequency. In their implementation, an automatic regulation of the sound level acts on the amplification gain, the aim being to give the best listening comfort and protection against instantaneous power peaks.
For reasons of commercial strategy and in response to patient demand, these prostheses are miniaturized to be worn behind the ear or as an in-ear insert, which leads to relatively poor performance capable only of very coarse hearing corrections. Typically only three frequency bands are defined for frequency correction. These prostheses are unambiguously intended for the most frequent "mild" deafness. More severe deafness can be relieved, but at the cost of painful inconvenience caused in particular by the amplification of background noise and by acoustic feedback (the Larsen effect). Moreover, there is no possibility of correction in the frequency zones for which no hearing remains.
On the history of prostheses for the profoundly deaf, one may usefully refer to the work of J. M. Tato, professor of otolaryngology, and of Messrs. Vigneron and Lamotte, cited in the article by J. C. Lafon entitled "Transposition et modulation", published in the Bulletin d'audiophonologie, annales scientifiques de Franche-Comté, Volume XII, No. 3&4, monograph 164, 1996. These prostheses exploit the fact that deaf people are rarely completely deaf and that a very small remnant of perception persists, often in the low frequencies, which it has often been attempted to take advantage of.
It is thus possible to give back to the deaf, in a very rudimentary way, a perception of sound by so-called "transposition" processes from the high frequencies towards the low frequencies. Unfortunately the understanding of language requires more than mere perception, and it turns out that the transmission of intelligibility is inseparable from a necessary "richness" of the sound. Restoring this "richness" has become one of the main subjects of concern. It was thus envisaged to create a synthetic speech in order to restore the structural elements that form the support of the intelligibility of everyday language.
The technique implemented in 1952 by J. M. Tato consists in recording the spoken speech very rapidly and playing it back at half speed. This achieves a transposition of one octave towards the low frequencies while retaining the structure of the original speech. Tests have shown a certain benefit for the deaf.
But the process has the disadvantage that it can only be used in deferred time. The technique developed in 1971 by C. Vigneron and M. Lamotte allows a "real-time" adaptation by cutting time into intervals of 1/100 of a second, removing every second interval, and applying J. M. Tato's process to the remaining intervals. Unfortunately this system exhibits a significant background noise.
The idea of building "natural" sounds is also present in a prosthesis cited under the name GALAXIE in the article by J. C. Lafon. This prosthesis implements a bank of filters and mixers distributed over six sub-bands and performs a transposition towards the low frequencies usable by the profoundly deaf.
Unfortunately, these processes, which operate at the signal level, exhibit too much distortion and too much listening discomfort to be usable by people suffering from intermediate deafness.
From the article by Jean-Claude Lafon, three orientations emerge which can be retained in producing a good prosthetic treatment.
The idea behind the invention is to overcome the above drawbacks by using a parametric model of the speech signal capable of carrying out transformations relevant to hearing correction for the hearing impaired, by implementing a method capable of satisfying the three constraints mentioned above.
To this end, the subject of the invention is a method for hearing correction for the hearing impaired, characterized in that it consists in extracting the parameters characterizing the pitch, the voicing, the energy and the spectrum of the speech signal, in modifying these parameters to make speech intelligible to a hearing impaired person, and in reconstructing from the modified parameters a speech signal perceptible to the hearing impaired person.
The invention also relates to a device for implementing the aforementioned method.
The method and the device according to the invention have the advantage of employing the parametric models commonly used in vocoders and of adapting them to the hearing of the hearing impaired. This makes it possible to work no longer at the level of the sound signal, as previous techniques do, but at the level of the symbolic structure of the speech signal, so as to preserve its intelligibility. Vocoders indeed have the advantage of using an alphabet that incorporates the notions of "pitch", "spectrum", "voicing" and "energy", which are very close to the physiological model of the mouth and the ear. By virtue of Shannon's theory, the information transmitted then does carry the intelligibility of speech. The materialization of speech intelligibility in a computer-processable form thus opens up a new perspective: intelligibility can be captured during the analysis operation, and it is restored during the synthesis.
Thanks to the invention, the synthesis operation of a parametric vocoder can therefore be adapted to the hearing characteristics of the hearing impaired. This technique, combined with more conventional processes, makes it possible to envisage a particularly general prosthetic method that can serve a very large population, including people suffering from intermediate deafness.
As another advantage, the method and the device according to the invention offer great freedom in the settings, since each parameter can be modified independently of the others without reciprocal impact, with a specific setting for each ear.
Other characteristics and advantages of the invention will become apparent from the following description, given with reference to the appended drawings, which represent:
Figure 1, the speech signal modeling parameters used in the implementation of the invention;
Figure 2, a parametric model of speech signal production;
Figure 3, the various steps required for implementing the method according to the invention, in the form of a flowchart;
Figure 4, a curve for transforming, during synthesis of the speech signal, the energy of the speech signal measured during the speech signal analysis process;
Figure 5, an embodiment of a device for implementing the method according to the invention.
The method for processing the speech signal according to the invention is based on a parametric model of the speech signal of the type commonly used in the production of HSX digital vocoders, a description of which can be found in the article by P. Gournay and F. Chartier entitled "A 1200 bit/s HSX speech coder for very low bit rate communications", published in the Proceedings of the IEEE Workshop on Signal Processing Systems (SiPS'98), Boston, 8-10 October 1998.
This model is defined mainly by four parameters:
- a voicing parameter, which describes the more or less periodic character of the voiced sounds or the random character of the unvoiced sounds of the speech signal;
- a parameter defining the fundamental frequency or "pitch" of the voiced sounds;
- a parameter representative of the time evolution of the energy;
- and a parameter representative of the spectral envelope of the speech signal.
The spectral envelope of the signal, or "spectrum", can be obtained by autoregressive modeling using a linear prediction filter or by a short-term Fourier analysis synchronous with the pitch. These four parameters are estimated periodically on the speech signal, from one to several times per frame depending on the parameter, for a frame duration typically between 10 and 30 ms.
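By way of illustration only, a possible Python sketch of such frame-based parameter estimation using the autocorrelation method of linear prediction is given below; the frame length, the LPC order of 16 and the crude energy measure are example choices, not values prescribed by the text above.

```python
import numpy as np

def lpc_autocorrelation(frame, order=16):
    """Denominator coefficients [1, a1, ..., a_order] of the all-pole filter 1/A(z),
    with A(z) = 1 + a1*z^-1 + ... + a_order*z^-order (autocorrelation method)."""
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
    if r[0] == 0:                                    # silent frame: flat spectral envelope
        return np.concatenate(([1.0], np.zeros(order)))
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    R = R + 1e-6 * r[0] * np.eye(order)              # small regularization for numerical safety
    predictor = np.linalg.solve(R, r[1:])            # optimal linear predictor coefficients
    return np.concatenate(([1.0], -predictor))

def analyse(signal, fs=8000, frame_ms=22.5, order=16):
    """Split the signal into frames and return (energy in dB, A(z) coefficients) per frame."""
    n = int(fs * frame_ms / 1000)
    params = []
    for start in range(0, len(signal) - n + 1, n):
        frame = signal[start:start + n]
        energy_db = 10 * np.log10(np.mean(frame ** 2) + 1e-12)  # crude per-sample energy measure
        params.append((energy_db, lpc_autocorrelation(frame, order)))
    return params
```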
The speech signal is restored in the manner represented in Figure 2, by exciting, with a pitch-driven pulse train or with a stochastic noise depending on whether the sound is respectively voiced or unvoiced, a digital synthesis filter 1 which models the vocal tract by its transfer function.
A switch 2 transmits the pitch or the noise to the input of the synthesis filter 1.
A variable-gain amplifier 3, whose gain depends on the energy of the speech signal, is placed at the output of the synthesis filter 1.
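A minimal sketch of this source-filter synthesis, assuming an all-pole filter 1/A(z) excited by an impulse train or white noise; the filter coefficients and the numerical values in the usage example are arbitrary placeholders, not values from the patent.

```python
import numpy as np
from scipy.signal import lfilter

def synthesize_frame(a, gain_db, voiced, pitch_hz, n, fs=8000):
    """One frame of the Figure 2 model: pulse train or noise through 1/A(z), then a gain.

    `a` holds the denominator coefficients [1, a1, ..., ap] of the synthesis filter.
    """
    if voiced:
        excitation = np.zeros(n)
        period = max(1, int(round(fs / pitch_hz)))
        excitation[::period] = 1.0           # impulse train at the fundamental frequency (pitch)
    else:
        excitation = np.random.randn(n)      # stochastic noise excitation for unvoiced sounds
    frame = lfilter([1.0], a, excitation)    # vocal-tract modeling synthesis filter 1
    return frame * 10 ** (gain_db / 20)      # variable-gain amplifier 3 driven by the energy

# example: a 22.5 ms voiced frame at 120 Hz with an arbitrary 2nd-order envelope
frame = synthesize_frame(np.array([1.0, -0.9, 0.4]), gain_db=-20,
                         voiced=True, pitch_hz=120, n=180)
```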
In the case of a simple parametric model comprising a binary voiced/unvoiced decision, the synthesis procedure can be summarized as that shown in Figure 2. However, the method according to the invention, which is represented in Figure 3 in the form of a flowchart, is more complex and takes place in four stages: a preprocessing step 4, a step 5 of analysis of the signal obtained in step 4 for the extraction of the parameters characterizing the pitch, the voicing, the energy and the spectrum of the speech signal, a step 6 during which the parameters obtained in step 5 are modified, and a step 7 of synthesis of a speech signal composed from the modified parameters of step 6.
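This four-step chain can be pictured as a simple processing skeleton; the sketch below is only illustrative, and the arguments `preprocess`, `analyse`, `modify` and `synthesize` are hypothetical placeholders for steps 4 to 7, not functions defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class FrameParameters:
    pitch_hz: float      # fundamental frequency of the voiced sounds
    voicing_hz: float    # transition frequency between voiced low band and unvoiced high band
    energy_db: float     # energy in dB per sample
    spectrum: list       # spectral envelope (e.g. LPC coefficients)

def process(samples, preprocess, analyse, modify, synthesize):
    """Steps 4 to 7: each argument is a caller-supplied function."""
    clean = preprocess(samples)               # step 4: denoise, echo cancel, AGC, pre-emphasis
    frames = analyse(clean)                   # step 5: list of FrameParameters
    adapted = [modify(p) for p in frames]     # step 6: per-parameter hearing corrections
    return synthesize(adapted)                # step 7: rebuild an audible speech signal
```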
Step 4 is the one conventionally implemented in vocoders. It consists in particular, after having converted the speech signal into a digital signal, in reducing the background noise using for example the method described by Y. Ephraim and D. Malah, entitled "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator", published in IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, no. 6, pp. 1109-1121, 1984, in cancelling the acoustic echoes using for example the method described in the article by K. Murano, S. Unagami and F. Amano entitled "Echo cancellation and applications", published in IEEE Communications Magazine, 28(1), pp. 49-55, January 1990, in performing automatic gain control, or even in pre-emphasizing the signal.
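As a hedged sketch of two of these preprocessing operations, pre-emphasis and automatic gain control; the coefficient 0.95, the block size and the target level are arbitrary example values, not values specified by the patent.

```python
import numpy as np

def pre_emphasis(x, alpha=0.95):
    """First-order high-frequency boost: y[n] = x[n] - alpha * x[n-1]."""
    x = np.asarray(x, dtype=float)
    return np.append(x[0], x[1:] - alpha * x[:-1])

def automatic_gain_control(x, target_rms=0.1, block=256):
    """Very simple block-wise AGC: scale each block towards a target RMS level."""
    y = np.asarray(x, dtype=float).copy()
    for start in range(0, len(y), block):
        seg = y[start:start + block]
        rms = np.sqrt(np.mean(seg ** 2)) + 1e-12
        y[start:start + block] = seg * (target_rms / rms)
    return y
```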
The parametric processing of the speech signal obtained at the end of step 4 is carried out in step 5. It consists in cutting the speech signal into segments of constant duration Tanalyse (typically 5 to 30 milliseconds) and in estimating, on each of them, the parameters of the speech signal model. Using the HSX analysis model described in the article by P. Gournay and F. Chartier cited above, the pitch and the voicing are estimated every 22.5 milliseconds. The voicing information is given in the form of a transition frequency between a voiced low band and an unvoiced high band. The signal energy is estimated every 5.625 milliseconds. During unvoiced periods of the signal, it is estimated over a duration of 45 samples (5.625 ms) and expressed in dB per sample. During voiced periods of the signal, it is estimated over a whole number of fundamental periods at least equal to 45 samples and expressed in dB per sample. The spectral envelope S(ω) is estimated every 11.25 milliseconds. It is obtained by linear prediction (LPC), through autoregressive modeling by a filter of order OLPC = 16 with transfer function:
H(z) = 1 / A(z)
where A(z) is defined by:
A(z) = 1 + a1·z^-1 + a2·z^-2 + ... + aOLPC·z^-OLPC
In what follows, the parameters resulting from the analysis are denoted PitchAnalyse, VoisementAnalyse, EnergieAnalyse and S(ω).
The synthesis procedure consists, for each interval of duration Tanalyse, in exciting the synthesis filter given by S(ω) with the frequency-weighted sum (low band / high band defined by the voicing frequency) of a pseudo-random white noise for the high band and of a periodic Dirac comb signal of fundamental frequency equal to the pitch for the low band.
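A possible reading of this mixed excitation in Python, assuming a Butterworth band split at the voicing frequency; the filter order and the 8 kHz sampling rate are choices made for the example, not specified by the patent.

```python
import numpy as np
from scipy.signal import butter, lfilter

def mixed_excitation(pitch_hz, voicing_hz, n, fs=8000):
    """Low band: Dirac comb at the pitch; high band: pseudo-random white noise."""
    comb = np.zeros(n)
    comb[::max(1, int(round(fs / pitch_hz)))] = 1.0
    noise = np.random.randn(n)
    fc = min(max(voicing_hz, 1.0), fs / 2 - 1.0) / (fs / 2)   # normalized split frequency
    b_lo, a_lo = butter(4, fc, btype="low")
    b_hi, a_hi = butter(4, fc, btype="high")
    return lfilter(b_lo, a_lo, comb) + lfilter(b_hi, a_hi, noise)
```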
According to the invention, numerous transformations can be applied to the parameters resulting from the analysis of step 5. Each parameter can indeed be modified independently of the others, without reciprocal interaction. Moreover, these transformations can be constant or be activated only under particular conditions (for example, triggering of the modification of the spectral envelope for certain configurations of the energy distribution as a function of frequency, etc.).
These modifications are carried out in steps 6₁ to 6₄ and relate essentially to the value of the pitch, which characterizes the fundamental frequency, to the voicing, to the energy and to the spectral envelope.
For step 6₁, any transformation defining a new pitch value from the value of the analysis pitch obtained in step 5 is applicable.
The elementary transformation is a homothety, defined by the relation:
PitchSynthèse = FacteurPitch × PitchAnalyse
The factor FacteurPitch is adjustable for the type of deafness considered.
As for the pitch, the voicing frequency can be modified by any transformation defining a synthesis voicing frequency for each value of the voicing frequency analyzed in step 5.
In the exemplary implementation of the invention, the chosen transformation is a homothety, defined by the relation:
VoisementSynthèse = FacteurVoisement × VoisementAnalyse
When the voicing transition frequency resulting from the analysis, VoisementAnalyse, is at its maximum (fully voiced signal, VoisementAnalyse = VoisementMaximum), the voicing frequency used in synthesis is left unchanged (VoisementSynthèse = VoisementMaximum). Applying a multiplicative factor to it would indeed be totally arbitrary (VoisementAnalyse = VoisementMaximum does not mean an absence of voicing above VoisementMaximum). As an example, VoisementMaximum can be set to 3625 Hz.
The factor FacteurVoisement is adjustable for the type of deafness considered.
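A hedged illustration of these two homothetic transformations: the rule of leaving a fully voiced frame unchanged follows the text, while clipping the scaled value to VoisementMaximum is an assumption added for the example.

```python
VOISEMENT_MAXIMUM = 3625.0  # Hz, example value given above

def transform_pitch(pitch_analyse, facteur_pitch):
    """PitchSynthese = FacteurPitch * PitchAnalyse."""
    return facteur_pitch * pitch_analyse

def transform_voicing(voisement_analyse, facteur_voisement):
    """Scale the voicing transition frequency, leaving fully voiced frames unchanged."""
    if voisement_analyse >= VOISEMENT_MAXIMUM:        # fully voiced signal: keep as is
        return VOISEMENT_MAXIMUM
    # clipping the scaled value to the maximum is an assumption of this sketch
    return min(facteur_voisement * voisement_analyse, VOISEMENT_MAXIMUM)
```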
The energy is processed in step 6₃. As before, any transformation defining a synthesis energy from the energy of the speech signal analyzed in step 5 is applicable. In the example described below, the method according to the invention applies to the energy a compression function with four linear segments, in the manner represented on the graph of Figure 4.
The energy used in synthesis, EnergieSynthèse, is a piecewise linear function of the analysis energy EnergieAnalyse, with a slope Pente given by:
Pente = PenteBasse for EnergieAnalyse < EnergieAnalyseSeuil;
Pente = PenteHaute for EnergieAnalyse >= EnergieAnalyseSeuil;
and with the following limitations:
EnergieSynthèse <= EnergieSynthèseMax;
EnergieSynthèse = -infinity for EnergieAnalyse < EnergieAnalyseMin.
The processing parameters EnergieAnalyseMin, EnergieSynthèseMax, PenteBasse, PenteHaute and EnergieSynthèseSeuil are adjustable for the type of deafness considered.
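Interpreting the four-segment curve of Figure 4 as described above, a possible Python reading is given below; since Figure 4 is not reproduced here, the placement of the knee through the pair (EnergieAnalyseSeuil, EnergieSynthèseSeuil) and all numerical defaults are assumptions.

```python
import math

def compress_energy(e_analyse,
                    e_analyse_min=-60.0, e_analyse_seuil=-30.0,
                    e_synthese_seuil=-25.0, e_synthese_max=0.0,
                    pente_basse=1.5, pente_haute=0.5):
    """Four linear segments: silence gate, low slope, high slope, saturation (values in dB)."""
    if e_analyse < e_analyse_min:                     # segment 1: below the gate, no output
        return -math.inf
    pente = pente_basse if e_analyse < e_analyse_seuil else pente_haute
    # assumed knee point: the curve passes through (EnergieAnalyseSeuil, EnergieSynthèseSeuil)
    e_synthese = e_synthese_seuil + pente * (e_analyse - e_analyse_seuil)
    return min(e_synthese, e_synthese_max)            # segment 4: saturation at the maximum
```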
Le traitement de l'enveloppe spectrale a lieu à l'étape 6 4 Dans
cette étape toute transformation définissant un spectre S'(ω) à partir du
spectre
S(ω) analysé à l'étape 5 est applicable.The processing of the spectral envelope takes place in
S (ω) analyzed in step 5 is applicable.
Dans le mode de réalisation de l'invention décrit ci après la transformation élémentaire sur le spectre qui est mise en oeuvre est une compression homothétique de l'échelle des fréquences.In the embodiment of the invention described below the elementary transformation on the spectrum which is implemented is a homothetic compression of the frequency scale.
L'échelle des fréquences est comprimée d'un facteur FacteurSpectre de sorte que les bandes utiles avant et après le traitement soient respectivement égales à [O..FECH/2] et à [O..FECH/(2*FacteurSpectre)], où FECH est la fréquence d'échantillonnage du système.The frequency scale is compressed by a factor Factor Spectrum so that useful bands before and after treatment are respectively equal to [O..FECH / 2] and to [O..FECH / (2 * FacteurSpectre)], where FECH is the sampling frequency of the system.
La mise en oeuvre de cette compression homothétique est très simple lorsque le facteur de compression est une valeur entière. Il suffit alors de remplacer z par zFacteurSpectre dans l'expression du filtre tout pôle de synthèse, puis d'appliquer au signal synthétisé un filtrage passe-bas de fréquence de coupure FECH/(2*FacteurSpectre).The implementation of this homothetic compression is very simple when the compression factor is an integer value. It then suffices to replace z by z FacteurSpectre in the expression of the filter any pole of synthesis, then to apply to the synthesized signal a low-pass filtering of cut-off frequency FECH / (2 * FacteurSpectre).
Une première justification théorique de la validité du procédé décrit ci-dessus consiste à dire que cette opération équivaut à opérer un suréchantillonnage d'un facteur FacteurSpectre de la réponse impulsionnelle du conduit vocal, par insertion de FacteurSpectre-1 échantillons nuls entre chaque échantillons de la réponse impulsionnelle du conduit vocal d'origine puis par filtrage passe-bas du signal synthétisé avec une fréquence de coupure égale à FECHI(2*FacteurSpectre).A first theoretical justification of the validity of the process described above is to say that this operation is equivalent to operating a factor oversampling Factor Impulse response spectrum of the vocal tract, by insertion of FacteurSpectre-1 null samples between each sample of the original vocal tract impulse response then by low-pass filtering of the synthesized signal with a frequency of cutoff equal to FECHI (2 * FactorSpectrum).
A second theoretical justification is to consider that this operation is equivalent to duplicating and moving the poles of the transfer function.
Indeed, considering the OLPC single poles, denoted z_i = p_i·exp(2iπF_i), of the transfer function 1/A(z), the FacteurSpectre*OLPC poles of 1/A(z^FacteurSpectre) are then the FacteurSpectre complex roots of each of the z_i. The poles retained by the low-pass filtering operation are of the type z'_i = p_i^(1/FacteurSpectre)·exp(2iπF_i/FacteurSpectre), which shows that their resonance frequency has indeed undergone a homothetic compression by a factor FacteurSpectre.
The LPC filter used in synthesis can therefore be expressed in the form:
with:
It is possible to restrict the compression factor of the spectral envelope to an integer between 1 and 4, such that:
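The expressions referred to above appear to have been images in the original publication and are not reproduced in this text. A hedged reconstruction, consistent with the surrounding description (replace z by z^FacteurSpectre in the all-pole filter, with an integer compression factor between 1 and 4), would be:

```latex
% Plausible form of the synthesis filter, inferred from the surrounding text,
% not reproduced verbatim from the source.
\[
  H_{\text{synth}}(z) \;=\; \frac{1}{A\!\left(z^{\,\mathrm{FacteurSpectre}}\right)}
  \qquad\text{with}\qquad
  A(z) \;=\; 1 + \sum_{i=1}^{\mathrm{OLPC}} a_i\, z^{-i},
\]
\[
  \mathrm{FacteurSpectre} \in \{1, 2, 3, 4\}.
\]
```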
The speech rendered in step 7 can furthermore be accelerated or slowed down simply by modifying the duration of the time interval taken into account for the synthesis phase.
In practice, this operation can be carried out by implementing a homothetic transformation procedure defined by the relation:
If FacteurTemps > 1, the speech is slowed down. If FacteurTemps < 1, the speech is accelerated.
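The defining relation is again missing from the text. A hedged reading of the surrounding description is that the synthesis interval is simply the analysis interval scaled by FacteurTemps; the symbols T_synthesis and T_analysis below are illustrative, not taken from the patent.

```latex
% Assumed homothetic time transformation (symbol names are illustrative).
\[
  T_{\text{synthesis}} \;=\; \mathrm{FacteurTemps} \times T_{\text{analysis}},
\]
\[
  \mathrm{FacteurTemps} > 1 \;\Rightarrow\; \text{slowed-down speech},
  \qquad
  \mathrm{FacteurTemps} < 1 \;\Rightarrow\; \text{accelerated speech}.
\]
```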
In addition to the preceding processing steps, a number of post-processing operations can be envisaged, consisting for example of band-pass filtering and linear equalization of the synthesized signal, or of multiplexing the sound over both ears.
The purpose of the linear equalization operation is to compensate for the patient's audiogram by amplifying or attenuating certain frequency bands. In the prototype, the gain at 7 frequencies (0, 125, 250, 500, 1000, 2000 and 4000 Hz) can be adjusted over time between -80 and +10 dB according to the patient's needs or the specifics of his audiogram. This operation can be carried out, for example, by filtering with a fast Fourier transform (FFT), in the manner described in the book by D. Elliott entitled "Handbook of Digital Signal Processing", published in 1987 by Academic Press.
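A minimal sketch of such an FFT-based equalizer is given below, assuming numpy is available. The interpolation of the 7 gain points over the analysis grid and the single-block (non-overlapping) processing are illustrative simplifications, not details taken from the patent.

```python
import numpy as np

# Anchor frequencies (Hz) at which the prototype's gains are adjustable.
EQ_FREQS_HZ = np.array([0, 125, 250, 500, 1000, 2000, 4000], dtype=float)

def equalize_block(samples, gains_db, fech):
    """Apply a 7-point audiogram-compensating equalization to one block.

    gains_db: gain in dB at each anchor frequency, each in [-80, +10].
    fech: sampling frequency of the system.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fech)

    # Interpolate the 7 anchor gains over the analysis grid (illustrative choice).
    gains = np.interp(freqs, EQ_FREQS_HZ, gains_db,
                      left=gains_db[0], right=gains_db[-1])

    # Convert dB gains to linear amplitude factors and filter in the FFT domain.
    spectrum *= 10.0 ** (gains / 20.0)
    return np.fft.irfft(spectrum, n=len(samples))
```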
The multiplexing operation allows monophonic playback (for example the processed signal alone) or stereophonic playback (for example the processed signal on one channel and the unprocessed signal on the other). Stereophonic playback allows the hearing-impaired user to adapt the processing to each of his ears (for example two linear equalizers compensating two different audiograms), and possibly to keep intact on one ear a signal shape to which he is accustomed and on which he can rely, for example, to stay synchronized.
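The sketch below, again with numpy and with all names illustrative, shows the two playback modes described: mono (the processed signal duplicated on both earphones) and stereo (processed signal on one channel, untouched signal on the other).

```python
import numpy as np

def multiplex(processed, unprocessed, mode="stereo"):
    """Build the two-earphone output in the spirit of the multiplexing step.

    mode="mono":   the processed signal alone, duplicated on both ears.
    mode="stereo": processed signal on one ear, unprocessed signal on the other.
    Returns an array of shape (n_samples, 2).
    """
    if mode == "mono":
        return np.column_stack([processed, processed])
    return np.column_stack([processed, unprocessed])
```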
The device for implementing the method according to the invention, which is shown in FIG. 5, comprises a first channel composed of an analysis device 8, a synthesis device 9 and a first equalizer 10, and a second channel comprising a second equalizer 11, the two channels together being coupled between a sound pick-up device 13 and a pair of earphones 12a, 12b. The analysis device 8 and the synthesis device 9 can be implemented using known vocoder techniques, in particular that of the aforementioned HSX vocoders. The outputs of the equalizers of the two channels are multiplexed by a multiplexer 14 to allow monophonic or stereophonic sound playback. A processing device 15, formed by a microprocessor or any equivalent device, is coupled to the synthesis device 9 to modify the parameters supplied by the analysis device 8.
A pre-processing device 16, interposed between the sound pick-up device 13 and each of the two channels, carries out the denoising and the conversion of the speech signal into digital samples. The denoised digital samples are applied respectively to the input of the equalizer 11 and to the input of the analysis device 8.
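Tying the elements of FIG. 5 together, a hedged end-to-end sketch of the signal path could look as follows. The stub functions stand in for devices 16, 8, 15 and 9 and are placeholders, not implementations defined by the patent; equalize_block and multiplex reuse the sketches given above.

```python
import numpy as np

# Placeholder stand-ins for devices 16, 8, 15 and 9 of FIG. 5; the real devices
# use the denoising, HSX-style analysis/synthesis and parameter modifications
# described earlier in the text.
def denoise(x):                        # pre-processing device 16
    return np.asarray(x, dtype=float)

def analyze(x):                        # analysis device 8 (excitation + LPC params)
    return {"excitation": x, "lpc": np.array([1.0])}

def adapt_parameters(p, params):       # processing device 15 (energy mapping, etc.)
    return p

def synthesize(p):                     # synthesis device 9
    return p["excitation"]

def process_frame(raw_samples, params, fech):
    """Illustrative signal path of FIG. 5: pick-up 13 -> pre-processing 16 ->
    (analysis 8 -> modified synthesis 9 -> equalizer 10) and (equalizer 11) ->
    multiplexer 14 -> earphones 12a/12b."""
    clean = denoise(raw_samples)                                      # device 16
    modified = adapt_parameters(analyze(clean), params)               # devices 8, 15
    processed = synthesize(modified)                                  # device 9
    left = equalize_block(processed, params["gains_left_db"], fech)   # equalizer 10
    right = equalize_block(clean, params["gains_right_db"], fech)     # equalizer 11
    return multiplex(left, right, mode=params["mode"])                # multiplexer 14
```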
According to other embodiments of the device according to the invention, the processing device 15 can be integrated into the synthesis device 9; it is likewise possible to integrate all of the analysis and synthesis processing into a single piece of software executable on a personal computer or on a telephone answering machine, for example.
Claims (3)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR9815354A FR2786908B1 (en) | 1998-12-04 | 1998-12-04 | PROCESS AND DEVICE FOR THE PROCESSING OF SOUNDS FOR THE HEARING DISEASE |
FR9815354 | 1998-12-04 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1006511A1 true EP1006511A1 (en) | 2000-06-07 |
EP1006511B1 EP1006511B1 (en) | 2004-04-28 |
Family
ID=9533606
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP99403027A Expired - Lifetime EP1006511B1 (en) | 1998-12-04 | 1999-12-03 | Sound processing method and device for adapting a hearing aid for hearing impaired |
Country Status (5)
Country | Link |
---|---|
US (1) | US6408273B1 (en) |
EP (1) | EP1006511B1 (en) |
AT (1) | ATE265733T1 (en) |
DE (1) | DE69916756T2 (en) |
FR (1) | FR2786908B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1173044A2 (en) * | 2000-06-30 | 2002-01-16 | Cochlear Limited | Implantable system for the rehabilitation of a hearing disorder |
WO2003071523A1 (en) * | 2002-02-19 | 2003-08-28 | Qualcomm, Incorporated | Speech converter utilizing preprogrammed voice profiles |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2784218B1 (en) * | 1998-10-06 | 2000-12-08 | Thomson Csf | LOW-SPEED SPEECH CODING METHOD |
US7110951B1 (en) * | 2000-03-03 | 2006-09-19 | Dorothy Lemelson, legal representative | System and method for enhancing speech intelligibility for the hearing impaired |
FR2815457B1 (en) * | 2000-10-18 | 2003-02-14 | Thomson Csf | PROSODY CODING METHOD FOR A VERY LOW-SPEED SPEECH ENCODER |
US6823312B2 (en) * | 2001-01-18 | 2004-11-23 | International Business Machines Corporation | Personalized system for providing improved understandability of received speech |
US6829355B2 (en) * | 2001-03-05 | 2004-12-07 | The United States Of America As Represented By The National Security Agency | Device for and method of one-way cryptographic hashing |
US7660715B1 (en) | 2004-01-12 | 2010-02-09 | Avaya Inc. | Transparent monitoring and intervention to improve automatic adaptation of speech models |
US8306821B2 (en) | 2004-10-26 | 2012-11-06 | Qnx Software Systems Limited | Sub-band periodic signal enhancement system |
US7680652B2 (en) | 2004-10-26 | 2010-03-16 | Qnx Software Systems (Wavemakers), Inc. | Periodic signal enhancement system |
US7610196B2 (en) * | 2004-10-26 | 2009-10-27 | Qnx Software Systems (Wavemakers), Inc. | Periodic signal enhancement system |
US8170879B2 (en) * | 2004-10-26 | 2012-05-01 | Qnx Software Systems Limited | Periodic signal enhancement system |
US7949520B2 (en) * | 2004-10-26 | 2011-05-24 | QNX Software Sytems Co. | Adaptive filter pitch extraction |
US7716046B2 (en) * | 2004-10-26 | 2010-05-11 | Qnx Software Systems (Wavemakers), Inc. | Advanced periodic signal enhancement |
US8543390B2 (en) * | 2004-10-26 | 2013-09-24 | Qnx Software Systems Limited | Multi-channel periodic signal enhancement system |
KR100707339B1 (en) * | 2004-12-23 | 2007-04-13 | 권대훈 | Hearing degree based equalization method and device |
CA2611947C (en) * | 2005-06-27 | 2011-11-01 | Widex A/S | Hearing aid with enhanced high frequency reproduction and method for processing an audio signal |
US7653543B1 (en) * | 2006-03-24 | 2010-01-26 | Avaya Inc. | Automatic signal adjustment based on intelligibility |
US7831420B2 (en) * | 2006-04-04 | 2010-11-09 | Qualcomm Incorporated | Voice modifier for speech processing systems |
DE102006019694B3 (en) * | 2006-04-27 | 2007-10-18 | Siemens Audiologische Technik Gmbh | Hearing aid amplification adjusting method, involves determining maximum amplification or periodical maximum amplification curve in upper frequency range based on open-loop-gain- measurement |
US7962342B1 (en) | 2006-08-22 | 2011-06-14 | Avaya Inc. | Dynamic user interface for the temporarily impaired based on automatic analysis for speech patterns |
US7925508B1 (en) | 2006-08-22 | 2011-04-12 | Avaya Inc. | Detection of extreme hypoglycemia or hyperglycemia based on automatic analysis of speech patterns |
US20080231557A1 (en) * | 2007-03-20 | 2008-09-25 | Leadis Technology, Inc. | Emission control in aged active matrix oled display using voltage ratio or current ratio |
US8041344B1 (en) | 2007-06-26 | 2011-10-18 | Avaya Inc. | Cooling off period prior to sending dependent on user's state |
US8904400B2 (en) * | 2007-09-11 | 2014-12-02 | 2236008 Ontario Inc. | Processing system having a partitioning component for resource partitioning |
US8850154B2 (en) | 2007-09-11 | 2014-09-30 | 2236008 Ontario Inc. | Processing system having memory partitioning |
US8694310B2 (en) | 2007-09-17 | 2014-04-08 | Qnx Software Systems Limited | Remote control server protocol system |
US8209514B2 (en) * | 2008-02-04 | 2012-06-26 | Qnx Software Systems Limited | Media processing system having resource partitioning |
TR201810466T4 (en) * | 2008-08-05 | 2018-08-27 | Fraunhofer Ges Forschung | Apparatus and method for processing an audio signal to improve speech using feature extraction. |
WO2012003602A1 (en) * | 2010-07-09 | 2012-01-12 | 西安交通大学 | Method for reconstructing electronic larynx speech and system thereof |
JP5778778B2 (en) * | 2010-12-08 | 2015-09-16 | ヴェーデクス・アクティーセルスカプ | Hearing aid and improved sound reproduction method |
US9570066B2 (en) * | 2012-07-16 | 2017-02-14 | General Motors Llc | Sender-responsive text-to-speech processing |
US9905240B2 (en) * | 2014-10-20 | 2018-02-27 | Audimax, Llc | Systems, methods, and devices for intelligent speech recognition and processing |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2572535B1 (en) | 1984-10-30 | 1986-12-19 | Thomson Csf | SPECTRUM ANALYZER WITH SURFACE WAVE DISPERSITIVE FILTERS |
US5765127A (en) * | 1992-03-18 | 1998-06-09 | Sony Corp | High efficiency encoding method |
WO1999010719A1 (en) * | 1997-08-29 | 1999-03-04 | The Regents Of The University Of California | Method and apparatus for hybrid coding of speech at 4kbps |
- 1998-12-04 FR FR9815354A patent/FR2786908B1/en not_active Expired - Fee Related
- 1999-12-02 US US09/453,085 patent/US6408273B1/en not_active Expired - Fee Related
- 1999-12-03 DE DE69916756T patent/DE69916756T2/en not_active Expired - Fee Related
- 1999-12-03 AT AT99403027T patent/ATE265733T1/en not_active IP Right Cessation
- 1999-12-03 EP EP99403027A patent/EP1006511B1/en not_active Expired - Lifetime
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4051331A (en) * | 1976-03-29 | 1977-09-27 | Brigham Young University | Speech coding hearing aid system utilizing formant frequency transformation |
US4791672A (en) * | 1984-10-05 | 1988-12-13 | Audiotone, Inc. | Wearable digital hearing aid and method for improving hearing ability |
WO1996016533A2 (en) * | 1994-11-25 | 1996-06-06 | Fink Fleming K | Method for transforming a speech signal using a pitch manipulator |
US5737719A (en) * | 1995-12-19 | 1998-04-07 | U S West, Inc. | Method and apparatus for enhancement of telephonic speech signals |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1173044A2 (en) * | 2000-06-30 | 2002-01-16 | Cochlear Limited | Implantable system for the rehabilitation of a hearing disorder |
EP1173044A3 (en) * | 2000-06-30 | 2005-08-17 | Cochlear Limited | Implantable system for the rehabilitation of a hearing disorder |
US7376563B2 (en) | 2000-06-30 | 2008-05-20 | Cochlear Limited | System for rehabilitation of a hearing disorder |
WO2003071523A1 (en) * | 2002-02-19 | 2003-08-28 | Qualcomm, Incorporated | Speech converter utilizing preprogrammed voice profiles |
US6950799B2 (en) | 2002-02-19 | 2005-09-27 | Qualcomm Inc. | Speech converter utilizing preprogrammed voice profiles |
Also Published As
Publication number | Publication date |
---|---|
DE69916756T2 (en) | 2005-04-07 |
FR2786908A1 (en) | 2000-06-09 |
DE69916756D1 (en) | 2004-06-03 |
EP1006511B1 (en) | 2004-04-28 |
US6408273B1 (en) | 2002-06-18 |
ATE265733T1 (en) | 2004-05-15 |
FR2786908B1 (en) | 2001-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1006511B1 (en) | Sound processing method and device for adapting a hearing aid for hearing impaired | |
EP2518724B1 (en) | Microphone/headphone audio headset comprising a means for suppressing noise in a speech signal, in particular for a hands-free telephone system | |
Drullman et al. | Effect of temporal envelope smearing on speech reception | |
EP1250703B1 (en) | Noise reduction apparatus and method | |
EP0745363A1 (en) | Hearing aid having a wavelets-operated cochlear implant | |
US7243060B2 (en) | Single channel sound separation | |
EP2113913B1 (en) | Method and system for reconstituting low frequencies in an audio signal | |
EP0623238B1 (en) | Advanced audio-frequency converter, equipment comprising said converter for treating patients, and method of using said equipment | |
CN111107478B (en) | Sound enhancement method and sound enhancement system | |
Alexander et al. | Effects of WDRC release time and number of channels on output SNR and speech recognition | |
EP0054450B1 (en) | Hearing aid devices | |
WO2006125931A1 (en) | Method of producing a plurality of time signals | |
Nogaki et al. | Effect of training rate on recognition of spectrally shifted speech | |
CH631853A5 (en) | AUDIO-VOICE INTEGRATOR APPARATUS. | |
CA2494697A1 (en) | Audio-intonation calibration method | |
EP0246970B1 (en) | Hearing aid devices | |
Kuk et al. | Improving hearing aid performance in noise: Challenges and strategies | |
FR2695750A1 (en) | Speech signal treatment device for hard of hearing - has speech analyser investigating types of sound-noise, and adjusts signal treatment according to speech type | |
CN100440317C (en) | Voice frequency compression method of digital deaf-aid | |
EP0989544A1 (en) | Device and method for filtering a speech signal, receiver and telephone communications system | |
EP3711307A1 (en) | Method for live public address, in a helmet, taking into account the auditory perception characteristics of the listener | |
Yasu et al. | Critical-band compression method for digital hearing aids | |
WO1998007130A1 (en) | Method and device for teaching languages | |
CA2548949C (en) | Device for treating audio signals, especially for treating audiophonatory disorders | |
Baltzell et al. | Binaural consequences of speech envelope enhancement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
AX | Request for extension of the european patent |
Free format text: AL;LT;LV;MK;RO;SI |
|
17P | Request for examination filed |
Effective date: 20000816 |
|
AKX | Designation fees paid |
Free format text: AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: THALES |
|
17Q | First examination report despatched |
Effective date: 20030211 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040428
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED. Effective date: 20040428
Ref country code: IE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040428
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040428
Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040428
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040428 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D Free format text: NOT ENGLISH |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D Free format text: FRENCH |
|
REF | Corresponds to: |
Ref document number: 69916756 Country of ref document: DE Date of ref document: 20040603 Kind code of ref document: P |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040728
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040728
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040728 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20040808 |
|
GBT | Gb: translation of ep patent filed (gb section 77(6)(a)/1977) |
Effective date: 20040831 |
|
NLV1 | Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act | ||
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FD4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20041203 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20041231
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20041231
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20041231
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20041231 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20050131 |
|
BERE | Be: lapsed |
Owner name: *THALES Effective date: 20041231 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20061129 Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20061130 Year of fee payment: 8 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20061208 Year of fee payment: 8 |
|
BERE | Be: lapsed |
Owner name: *THALES Effective date: 20041231 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20040928 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20071203 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20080701 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST Effective date: 20081020 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20071203 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20071231 |