EP3797415B1 - Sound processing apparatus and method for sound enhancement
- Publication number
- EP3797415B1 (application EP18752715.5A)
- Authority
- EP
- European Patent Office
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks
Description
- The invention relates to the field of sound processing. More specifically, the invention relates to a sound processing apparatus and method for sound, in particular speech enhancement.
- Sound or audio enhancement conventionally uses only a recording of the speech and environment, i.e. noise, for producing the enhanced speech audio. Often, audio enhancement procedures make use of neural networks, such as the speech enhancement procedure described in the article "A Fully Convolutional Neural Network For Speech Enhancement", Se Rim Park and Jinwon Lee, in Proc. Interspeech 2017, August 20-24, 2017, pages 1993-1997, Stockholm, Sweden.
- However, given only one recording that contains both the speech and the noise created by the environment, it can be difficult, in particular for a neural network, to ascertain which components of an audio signal originate from the environment, which components are the clean speech or sound, i.e. the target signal, and which components are just reverberation effects of both the speech and the environment. Additionally, in multichannel settings, audio localization can be performed, but sound enhancement may have difficulties predicting whether a given sound source is to be attributed to the speech or the environment.
- Thus, there is still a need for an improved sound processing apparatus and method allowing for an improved enhancement of a noisy sound signal.
- In Anurag Kumar et al., 'Speech Enhancement in Multiple Noise Conditions Using Deep Neural Networks' (Interspeech 2016, Vol. 2016, (2016), p. 3738 - 3742), deep neural network training strategies based on psychoacoustic models from speech coding were used for speech enhancement.
- Similarly, in Yong Xu et al., 'Dynamic Noise Aware Training for Speech Enhancement Based on Deep Neural Networks' (Interspeech 2014, (2014), p. 2670 - 2674), a deep neural network algorithm based on noise-aware training, incorporating noise information, noise type generalization enrichments and global variance equalization, is proposed.
- Furthermore, in Choi, J. et al., 'An auditory-based adaptive speech enhancement system by neural network according to noise intensity' (Circuits and Systems, Vol. 2, (200), p. 993 - 996), an additional speech enhancement strategy called lateral inhibition is proposed, capable of estimating noise intensities with the aid of neural networks.
- Additionally, in US 2018/040333 A1, a method for performing speech enhancement using deep neural networks is suggested, in which a microphone signal is used for training with target training signals including a signal approximation of clean speech. Moreover, estimations of loudspeaker signals are performed based on acoustic-echo-cancelling signals and the aforementioned microphone signal.
- It is an object of the invention to provide an improved sound processing apparatus and method allowing for an improved enhancement of a noisy sound signal.
- The foregoing and other objects are achieved by the subject matter of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
- Generally, embodiments of the invention are based on the idea of using, for a plurality of training sound signals each including a training target signal and a training noise signal, the training noise signal as an additional input for training the neural network of a sound processing apparatus, thereby improving the sound enhancement process. In an embodiment, the environment recording, i.e. the training noise signal, can be fed into a dedicated portion of the neural network that outputs an audio environment representation defined, for instance, by a parameter set. The environment representation, in turn, can be fed as an additional input to another portion of the neural network that produces the enhanced sound. By explicitly learning to represent audio environments, and by using these representations for enhancement, embodiments of the invention make it possible to perform efficient speech enhancement in unpredictable audio environments.
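- Conceptually, the data flow can be sketched in a few lines of Python; the names encode_environment and enhance below are placeholders for the two portions of the network described above, not terms taken from the patent:

```python
def enhance_with_environment(noise_waveform, noisy_waveform, encode_environment, enhance):
    """Schematic two-stage flow: encode the environment, then enhance conditioned on that encoding."""
    environment = encode_environment(noise_waveform)   # audio environment representation (parameter set)
    return enhance(noisy_waveform, environment)        # enhancement conditioned on the representation
```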
- The invention can be implemented in hardware and/or software.
- Further embodiments of the invention will be described with respect to the following figures, wherein:
- Fig. 1a shows a schematic diagram illustrating an example of processing blocks implemented in a single channel sound processing apparatus according to an embodiment in a training phase;
- Fig. 1b shows a schematic diagram illustrating an example of processing blocks implemented in a single channel sound processing apparatus according to an embodiment in an application phase;
- Fig. 2a shows a schematic diagram illustrating an example of processing blocks implemented in a multi-channel sound processing apparatus according to an embodiment in a training phase;
- Fig. 2b shows a schematic diagram illustrating an example of processing blocks implemented in a multi-channel sound processing apparatus according to an embodiment in an application phase; and
- Fig. 3 shows a flow diagram illustrating an example of a sound processing method according to an embodiment.
- In the various figures, identical reference signs will be used for identical or at least functionally equivalent features.
- In the following description, reference is made to the accompanying drawings, which form part of the disclosure, and in which are shown, by way of illustration, specific aspects in which the invention may be placed. It is understood that other aspects may be utilized and structural or logical changes may be made without departing from the scope of the invention. The following detailed description, therefore, is not to be taken in a limiting sense, as the scope of the invention is defined by the appended claims.
- For instance, it is understood that a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa. For example, if a specific method step is described, a corresponding device may include a unit to perform the described method step, even if such unit is not explicitly described or illustrated in the figures. Further, it is understood that the features of the various exemplary aspects described herein may be combined with each other, unless specifically noted otherwise.
- Figure 1a shows a schematic diagram illustrating an example of processing blocks implemented in a single channel sound processing apparatus 100 according to an embodiment in a training phase, while figure 1b shows a schematic diagram illustrating an example of processing blocks implemented in the single channel sound processing apparatus 100 in an application phase.
- As will be described in more detail further below, the sound processing apparatus 100 is configured to process a current noisy sound signal, in particular a speech signal, comprising a target signal and a current noise signal into an enhanced, i.e. de-noised, sound signal, in particular a speech signal.
- The apparatus 100, which could be implemented, for instance, as a loudspeaker, a mobile phone and the like, comprises processing circuitry, in particular one or more processors, configured to provide, i.e. implement, an adjustable neural network. In the embodiment shown in figures 1a and 1b, the adjustable neural network comprises a first neural sub-network 103 and a second neural sub-network 107. In an embodiment, the first neural sub-network 103 and/or the second neural sub-network 107 (referred to as "Environment Residual Blocks" 103, 107 in the figures) can comprise one or more residual blocks. In further embodiments, the first neural sub-network 103 and the second neural sub-network 107 can constitute independent, i.e. separate, neural networks. In an embodiment, the neural network, the first neural sub-network 103 and/or the second neural sub-network 107 can comprise one or more convolutional layers. More details about possible implementations of the neural network, the first neural sub-network 103 and/or the second neural sub-network 107 can be found, for instance, in the article "A Fully Convolutional Neural Network For Speech Enhancement", Se Rim Park and Jinwon Lee, in Proc. Interspeech 2017, August 20-24, 2017, pages 1993-1997, Stockholm, Sweden.
- In a training phase, the adjustable neural network 103, 107 of the sound processing apparatus 100 is configured to be trained, i.e. conditioned, using as a first input a training noise signal (referred to in figure 1a as "Environment Waveform"), as a second input a noisy training sound signal (referred to in figure 1a as "Environment + speech Waveform") comprising a training target signal and the training noise signal, and as a third input the training target signal (referred to in figure 1a as "clean Waveform"). Usually, the training phase involves processing a set of training sound signals comprising a plurality of known training target signals and a plurality of known training noise signals.
- In an application phase, the adjustable neural network 103, 107 of the sound processing apparatus 100 is configured to adjust itself on the basis of the current noise signal and to generate an estimated noise signal on the basis of the sound signal comprising the target signal and the current noise signal. The processing circuitry of the sound processing apparatus is further configured to process the sound signal into the enhanced sound signal on the basis of the estimated noise signal.
- As illustrated by blocks 101, 105 and 113 of figures 1a and 1b, in an embodiment, the processing unit of the sound processing apparatus 100 is configured to transform the training noise signal, the noisy training sound signal, the training target signal, the current noise signal and the current sound signal from the time domain into the frequency domain by generating a respective log spectrum thereof. To this end, the blocks 101, 105 and 113 can be configured to perform a short time Fourier transform (STFT) using, for instance, 25 ms frames shifted by 10 ms to extract the spectrum of each signal.
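- As a minimal sketch of such a front end, the following Python snippet computes a log-magnitude STFT with 25 ms frames shifted by 10 ms; the 16 kHz sampling rate, the Hann window and the small flooring constant are illustrative assumptions, not values prescribed by the description:

```python
import torch

def log_spectrum(waveform: torch.Tensor, sample_rate: int = 16000) -> torch.Tensor:
    """Log-magnitude STFT with 25 ms frames shifted by 10 ms, as in blocks 101, 105 and 113."""
    n_fft = int(0.025 * sample_rate)  # 25 ms frame -> 400 samples at the assumed 16 kHz rate
    hop = int(0.010 * sample_rate)    # 10 ms shift -> 160 samples
    spec = torch.stft(waveform, n_fft=n_fft, hop_length=hop,
                      window=torch.hann_window(n_fft), return_complex=True)
    return torch.log(spec.abs() + 1e-8)  # small constant avoids log(0)
```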
- The spectrum of the training noise signal (which is provided by block 101 of figure 1) is then processed by the first neural sub-network 103. In an embodiment, the first neural sub-network 103 comprises a sequence of residual blocks. In an embodiment, a respective residual block comprises two parallel paths. The first path can contain two convolutional layers applied one after another, where batch normalization and a rectified linear non-linearity are applied in between the layers. The second path can contain the identity function. The respective outputs of the two paths can be summed, and a rectified linear non-linearity can be applied.
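- Purely as an illustrative sketch of such a residual block (the channel count, kernel size and padding below are assumptions rather than values taken from the description), a PyTorch implementation could look as follows:

```python
import torch
import torch.nn as nn

class EnvironmentResidualBlock(nn.Module):
    """Two-path residual block: conv -> batch norm -> ReLU -> conv on one path, identity on the other."""
    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2  # keep the time-frequency map size unchanged
        self.conv1 = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.bn = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv2(torch.relu(self.bn(self.conv1(x))))  # first path: two convolutions
        return torch.relu(x + y)                            # sum with the identity path, then ReLU
```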
- The output provided by the first neural sub-network 103 is a representation of the environment associated with a respective training noise signal (referred to as "Environment Embedding" in the figures). Thus, in an embodiment, in the training phase (illustrated in figure 1a), the first neural sub-network 103 is configured to generate, on the basis of the training noise signal provided by block 101, a parameter set, i.e. an environment embedding vector describing the training noise signal, and to provide the parameter set to the second neural sub-network 107, wherein the second neural sub-network 107 is configured to adjust itself on the basis of the parameter set provided by the first neural sub-network 103. Likewise, in the application phase (illustrated in figure 1b), the first neural sub-network 103 is configured to generate, on the basis of the current noise signal, the environment embedding vector describing the current noise signal and to provide the environment embedding vector to the second neural sub-network 107, wherein the second neural sub-network 107 is configured to adjust itself on the basis of the parameter set provided by the first neural sub-network 103.
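- The description does not specify how the time-frequency output of these residual blocks is reduced to an embedding vector; one plausible sketch, reusing the EnvironmentResidualBlock above and assuming a stem convolution and global average pooling, is:

```python
import torch
import torch.nn as nn

class EnvironmentEncoder(nn.Module):
    """Sketch of the first neural sub-network 103: residual blocks followed by global average pooling."""
    def __init__(self, channels: int = 64, num_blocks: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # lift the log spectrum to feature maps
        self.blocks = nn.Sequential(*[EnvironmentResidualBlock(channels) for _ in range(num_blocks)])

    def forward(self, log_spec: torch.Tensor) -> torch.Tensor:
        # log_spec: (batch, 1, freq, time) -> embedding: (batch, channels)
        features = self.blocks(self.stem(log_spec))
        return features.mean(dim=(2, 3))  # assumed pooling over frequency and time
```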
- The output of the first neural sub-network 103, i.e. the environment embedding vector describing, in the training phase, the training noise signal or, in the application phase, the current noise signal, is used by the second neural sub-network 107 to adjust itself. In other words, the parameter set defined by the environment embedding vector is used as an additional input by the second neural sub-network 107 such that the output of the second neural sub-network 107 depends on the environment embedding vector and is "adjusting" to the noise in that sense. There can be multiple ways for the second neural sub-network 107 to use this additional input, which also depend on the inner structure of the second neural sub-network 107. In one embodiment, the second neural sub-network 107 comprises a set of residual blocks, each comprising two convolutional layers. For each convolutional layer, the environment embedding vector is projected (a linear transformation) to a vector with a dimension equal to the number of feature maps in the convolutional layer. Then, the output of this projection is added to every spatial location in the output map of the convolutional layer.
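- A minimal sketch of this additive conditioning (embedding dimension and channel count chosen only for illustration): the embedding is linearly projected to one value per feature map and broadcast-added to every spatial location of the convolution output.

```python
import torch
import torch.nn as nn

class ConditionedConv(nn.Module):
    """Convolutional layer whose output map is shifted by a projected environment embedding."""
    def __init__(self, channels: int = 64, embedding_dim: int = 64, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)
        self.project = nn.Linear(embedding_dim, channels)  # linear projection to one value per feature map

    def forward(self, x: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time); embedding: (batch, embedding_dim)
        bias = self.project(embedding)[:, :, None, None]  # (batch, channels, 1, 1)
        return self.conv(x) + bias                        # added to every spatial location
```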
- In the training phase, the adjusted second neural sub-network 107 is configured to generate an estimated training noise signal (referred to as "Enhancement Mask" in figure 1a) on the basis of the training sound signal provided by block 105. Likewise, in the application phase, the adjusted second neural sub-network 107 is configured to generate an estimated noise signal (referred to as "Enhancement Mask" in figure 1a) on the basis of the sound signal provided by block 105.
- In the training phase, in block 109 of the sound processing apparatus 100 shown in figure 1a, an enhanced training sound signal (referred to as "Enhanced Speech Spectrum" in figure 1a) is generated on the basis of the estimated training noise signal provided by the second neural sub-network 107 and the training sound signal provided by block 105. In an embodiment, this can be done by subtracting the estimated training noise signal from the training sound signal or, alternatively, by adding the negative of the estimated training noise signal to the training sound signal.
- Likewise, in the application phase, in block 109 of the sound processing apparatus 100 shown in figure 1b, an enhanced sound signal (referred to as "Enhanced Speech Spectrum" in figure 1b) is generated on the basis of the estimated noise signal provided by the second neural sub-network 107 and the sound signal provided by block 105. In an embodiment, this can be done by subtracting the estimated noise signal from the sound signal or, alternatively, by adding the negative of the estimated noise signal to the sound signal.
- In the training phase shown in figure 1a, the output of block 109, i.e. the enhanced training sound signal, is used for training the second neural sub-network 107 by minimizing a difference measure, such as the absolute difference(s), the squared difference(s) and the like, between the training target signal provided by block 113 and the enhanced training sound signal provided by block 109. In an embodiment, a gradient-based optimization algorithm can be used for training, i.e. optimizing, the model parameters of the second neural sub-network 107.
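- A compact sketch of one such training step, assuming the enhancement is done by subtracting the estimated noise log spectrum and that the difference measure is the mean absolute difference with an Adam-style optimizer (the description only requires some difference measure and some gradient-based algorithm):

```python
import torch

def training_step(encoder, enhancer, optimizer, noise_spec, noisy_spec, clean_spec):
    """One gradient step: condition on the environment, estimate the noise, compare to the clean target."""
    embedding = encoder(noise_spec)                      # environment embedding from the training noise signal
    estimated_noise = enhancer(noisy_spec, embedding)    # second sub-network 107, conditioned on the embedding
    enhanced = noisy_spec - estimated_noise              # block 109: subtract the estimated training noise
    loss = torch.mean(torch.abs(clean_spec - enhanced))  # absolute-difference measure against block 113
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```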
- In the application phase shown in figure 1b, in block 112 (referred to as "Waveform Reconstruction" in figure 1b) the spectrum of the enhanced sound signal provided by block 109 is transformed back into the time domain. To this end, as illustrated by block 114 of figure 1b, the processing circuitry of the sound processing apparatus 100 can be further configured to extract phase information from the sound signal comprising the target signal and the current noise signal and to transform the spectrum of the enhanced sound signal back into the time domain on the basis of the extracted phase information. In the application phase, the final output of the sound processing apparatus 100 is the enhanced, i.e. de-noised, sound signal in the time domain (referred to as "Enhanced Waveform").
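- A hedged sketch of this reconstruction step (frame and hop sizes match the analysis STFT sketched above; taking the phase directly from the noisy input is one straightforward reading of block 114):

```python
import torch

def reconstruct_waveform(enhanced_log_spec, noisy_complex_spec, n_fft=400, hop=160):
    """Combine the enhanced log-magnitude with the phase of the noisy signal and invert the STFT."""
    magnitude = torch.exp(enhanced_log_spec)      # undo the log taken in the analysis stage
    phase = torch.angle(noisy_complex_spec)       # phase extracted from the noisy input (block 114)
    complex_spec = torch.polar(magnitude, phase)  # magnitude * exp(i * phase)
    return torch.istft(complex_spec, n_fft=n_fft, hop_length=hop,
                       window=torch.hann_window(n_fft))
```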
- Figures 2a and 2b show a further embodiment of the sound processing apparatus 100 shown in figures 1a and 1b. In the embodiment shown in figures 2a and 2b, the sound processing apparatus 100 is configured to process multi-channel sound signals. In the following, only the main differences between the embodiment of the sound processing apparatus 100 shown in figures 2a and 2b and the embodiment of the sound processing apparatus 100 shown in figures 1a and 1b will be described.
- As can be taken from figure 2b illustrating the application phase, the processing circuitry of the sound processing apparatus 100 can be configured to select a channel of the multi-channel sound signal and to process the multi-channel sound signal into the enhanced sound signal on the basis of the estimated noise signal by subtracting the estimated noise signal from (or adding its negative to) the selected channel of the multi-channel sound signal. The selected channel could be, for instance, the channel closest to the speaker. The enhanced spectrum is considered the output of the beamforming procedure in the multi-channel setting.
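- A short sketch of this per-channel processing; since the "closest to the speaker" criterion is application-specific, the channel index is simply passed in here:

```python
import torch

def enhance_selected_channel(multichannel_spec: torch.Tensor, estimated_noise: torch.Tensor, channel: int):
    """Subtract the estimated noise spectrum from one selected channel of a multi-channel signal."""
    # multichannel_spec: (channels, freq, time); the selected channel takes the role of the single-channel input
    return multichannel_spec[channel] - estimated_noise
```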
- Moreover, the processing circuitry in block 114 of figure 2b can be configured to select a channel of the multi-channel sound signal and to extract the phase information from the selected channel of the multi-channel sound signal.
-
- Figure 3 shows a flow diagram illustrating an example of a corresponding sound processing method 300 according to an embodiment. The method 300 comprises the steps of: providing 301 the adjustable neural network 103, 107; in a training phase 303, training, i.e. conditioning, the adjustable neural network 103, 107 using as a first input a training noise signal, as a second input a noisy training sound signal comprising a training target signal and the training noise signal, and as a third input the training target signal; and, in an application phase 305, adjusting the neural network 107 on the basis of the current noise signal, generating an estimated noise signal on the basis of the sound signal comprising the target signal and the current noise signal, and processing the sound signal into the enhanced sound signal on the basis of the estimated noise signal.
- While a particular feature or aspect of the disclosure may have been disclosed with respect to only one of several implementations or embodiments, such feature or aspect may be combined with one or more other features or aspects of the other implementations or embodiments as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "include", "have", "with", or other variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprise". Also, the terms "exemplary", "for example" and "e.g." are merely meant as an example, rather than the best or optimal. The terms "coupled" and "connected", along with derivatives, may have been used. It should be understood that these terms may have been used to indicate that two elements cooperate or interact with each other regardless of whether they are in direct physical or electrical contact, or they are not in direct contact with each other.
- Although specific aspects have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific aspects shown and described without departing from the scope of the claims.
- Although the elements in the following claims are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence.
- Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. While the invention has been described with reference to one or more particular embodiments, those skilled in the art recognize that many changes may be made thereto without departing from the scope of the invention. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
Claims (13)
- A sound processing apparatus (100) configured to process a sound signal comprising a target signal and a current noise signal into an enhanced sound signal, wherein the apparatus (100) is characterised in that it comprises:
  processing circuitry configured to provide an adjustable neural network (103, 107), wherein the adjustable neural network (103, 107) comprises a first neural sub-network (103) and a second neural sub-network (107) and is configured:
  in a training phase, to be trained using as a first input a training noise signal, as a second input a training sound signal comprising a training target signal and the training noise signal and as a third input the training target signal, and
  in an application phase, to adjust itself on the basis of the current noise signal and to generate an estimated noise signal on the basis of the sound signal, wherein, in the application phase, the first neural sub-network (103) is configured to generate on the basis of the current noise signal a parameter set describing the current noise signal and to provide the parameter set to the second neural sub-network (107), wherein the second neural sub-network (107) is configured to adjust on the basis of the parameter set provided by the first neural sub-network (103);
  wherein, in the application phase, the processing circuitry is further configured to process the sound signal into the enhanced sound signal on the basis of the estimated noise signal.
- The apparatus (100) of claim 1, wherein the processing circuitry is configured to transform the training noise signal, the training sound signal and the training target signal from a time domain into a frequency domain and wherein the adjustable neural network is configured, in the training phase, to be trained using the training noise signal, the training sound signal and the training target signal in the frequency domain.
- The apparatus (100) of claim 1 or 2, wherein the processing circuitry is configured to transform the current noise signal and the sound signal from the time domain into the frequency domain, wherein, in the training phase, the adjustable neural network is configured to adjust itself on the basis of the current noise signal in the frequency domain and to generate the estimated noise signal on the basis of the sound signal comprising the target signal and the current noise signal in the frequency domain and wherein the processing circuitry is configured to process the sound signal into the enhanced sound signal in the frequency domain on the basis of the estimated noise signal in the frequency domain.
- The apparatus (100) of claim 3, wherein the processing circuitry is further configured to transform the enhanced sound signal from the frequency domain into the time domain.
- The apparatus (100) of any one of the preceding claims, wherein, in the application phase, the processing circuitry is further configured to extract phase information from the sound signal comprising the target signal and the current noise signal and to process the sound signal into the enhanced sound signal on the basis of the estimated noise signal and the extracted phase information.
- The apparatus (100) of claim 5, wherein the sound signal is a multi-channel sound signal and wherein, in the application phase, the processing circuitry is configured to select a channel of the multichannel sound signal and to extract the phase information from the selected channel of the multi-channel sound signal.
- The apparatus (100) of any one of the preceding claims, wherein, in the training phase, the neural network is further configured to generate an estimated training noise signal on the basis of the training sound signal comprising the training target signal and the training noise signal, to process the training sound signal into an enhanced training sound signal on the basis of the estimated training noise signal and to be trained by minimizing a difference measure between the training target signal and the enhanced training sound signal.
- The apparatus (100) of any one of the preceding claims, wherein, in the application phase, the processing circuitry is configured to process the sound signal into the enhanced sound signal on the basis of the estimated noise signal by subtracting the estimated noise signal from the sound signal.
- The apparatus (100) of any one of the preceding claims, wherein the sound signal is a multichannel sound signal and wherein, in the application phase, the processing circuitry is configured to select a channel of the multi-channel sound signal and to process the multi-channel sound signal into the enhanced sound signal on the basis of the estimated noise signal by subtracting the estimated noise signal from the selected channel of the multi-channel sound signal.
- The apparatus (100) of any one of the preceding claims, wherein in the training phase, the first neural sub-network (103) is configured to generate on the basis of the training noise signal a parameter set describing the training noise signal and to provide the parameter set to the second neural sub-network (107), wherein the second neural sub-network (107) is configured to adjust on the basis of the parameter set provided by the first neural sub-network (103).
- The apparatus (100) of claim 1 or 10, wherein the first neural sub-network (103) and/or the second neural sub-network (107) comprises one or more convolutional layers.
- A sound processing method (300) for processing a sound signal comprising a target signal and a current noise signal into an enhanced sound signal, wherein the method (300) is characterised by comprising:
  providing (301) an adjustable neural network (103, 107) comprising a first neural sub-network (103) and a second neural sub-network (107);
  in a training phase (303), training the adjustable neural network (103, 107) using as a first input a training noise signal, as a second input a training sound signal comprising a training target signal and the training noise signal and as a third input the training target signal; and
  in an application phase (305), adjusting the neural network (103, 107) on the basis of the current noise signal, generating an estimated noise signal on the basis of the sound signal comprising the target signal and the current noise signal, and processing the sound signal into the enhanced sound signal on the basis of the estimated noise signal;
  wherein, in the application phase, the first neural sub-network (103) is generating on the basis of the current noise signal a parameter set describing the current noise signal and providing the parameter set to the second neural sub-network (107) and wherein the second neural sub-network (107) is adjusting on the basis of the parameter set provided by the first neural sub-network (103).
- A computer program comprising program code for performing the method (300) of claim 12, when executed on a computer or a processor.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2018/071070 WO2020025140A1 (en) | 2018-08-02 | 2018-08-02 | Sound processing apparatus and method for sound enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3797415A1 (en) | 2021-03-31
EP3797415B1 (en) | 2024-06-19
Family
ID=63165343
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18752715.5A Active EP3797415B1 (en) | 2018-08-02 | 2018-08-02 | Sound processing apparatus and method for sound enhancement |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP3797415B1 (en) |
WO (1) | WO2020025140A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111768795B (en) * | 2020-07-09 | 2024-08-30 | 腾讯科技(深圳)有限公司 | Noise suppression method, device, equipment and storage medium for voice signal |
CN111933171B (en) * | 2020-09-21 | 2021-01-22 | 北京达佳互联信息技术有限公司 | Noise reduction method and device, electronic equipment and storage medium |
CN112767908B (en) * | 2020-12-29 | 2024-05-21 | 安克创新科技股份有限公司 | Active noise reduction method based on key voice recognition, electronic equipment and storage medium |
CN113780107B (en) * | 2021-08-24 | 2024-03-01 | 电信科学技术第五研究所有限公司 | Radio signal detection method based on deep learning dual-input network model |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10074380B2 (en) * | 2016-08-03 | 2018-09-11 | Apple Inc. | System and method for performing speech enhancement using a deep neural network-based signal |
Also Published As
Publication number | Publication date |
---|---|
EP3797415A1 (en) | 2021-03-31 |
WO2020025140A1 (en) | 2020-02-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20201221 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: JIN, WENYU Inventor name: SETIAWAN, PANJI Inventor name: KEREN, GIL Inventor name: HAN, JING Inventor name: SCHULLER, BJOERN Inventor name: GROSCHE, PETER |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20220624 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20240201 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602018070770 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240709 Year of fee payment: 7 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240920 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240919 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240920 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240919 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1696463 Country of ref document: AT Kind code of ref document: T Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241021 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241021 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241019 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20241019 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602018070770 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20240802 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20240831 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240619 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed |
Effective date: 20250320 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20240919 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20240919 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20240831 |