US8583428B2 - Sound source separation using spatial filtering and regularization phases - Google Patents
- Publication number
- US8583428B2 (application US12/815,408)
- Authority
- US
- United States
- Prior art keywords
- signals
- separated
- spatially filtered
- separation
- spatial filtering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0272—Voice signal separating
- G10L21/028—Voice signal separating using properties of sound source
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
Definitions
- Speech separation, which refers to the simultaneous capture and separation of human voices by audio processing, is desirable in many scenarios.
- Sound source separation is generally similar, except that not all captured sounds need be speech.
- sound source separation can be used as a speech or other sound enhancement technique, such as to separate the desired speech or sounds from undesired signals such as noise or ambient speech.
- sound source separation may facilitate voice control of multimedia equipment, for example, in which the voice control commands from one or more speakers are received in various acoustic environments (e.g., with differing noise levels and reverberation conditions).
- Sound source/speech separation may be accomplished via a beamformer, which uses the spatial separation of the sources to separately weight the signals from an array of microphones, and thereby amplify/boost signals received from different directions differently.
- a nullformer operates similarly, but nulls/suppresses interference based on such spatial information. Beamformers are relatively simple, converge quickly, and are robust; however, they are somewhat imprecise and do not separate interfering signals well in real-world situations, where reflections of the interfering source arrive from many different angles. (A minimal beamforming sketch follows below.)
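To make the direction-dependent weighting concrete, below is a minimal delay-and-sum sketch in Python/NumPy. The array geometry, spacing, and function names are illustrative assumptions; the adaptive beamformers described later refine this idea with noise statistics.

```python
import numpy as np

def delay_and_sum_weights(theta, mic_positions, freq, c=343.0):
    """Frequency-domain delay-and-sum weights for a far-field source at
    direction theta (radians) relative to broadside of a linear array."""
    # Plane-wave propagation delay at each microphone position (meters).
    delays = mic_positions * np.sin(theta) / c
    # Phase-align every channel toward theta, then average across mics.
    return np.exp(-2j * np.pi * freq * delays) / len(mic_positions)

# Example (assumed geometry): 4 mics, 8 cm spacing, steered to 30
# degrees for the 1 kHz bin; the bin output is y = w.conj() @ x.
mics = np.arange(4) * 0.08
w = delay_and_sum_weights(np.deg2rad(30.0), mics, 1000.0)
```

A nullformer uses the same machinery but chooses weights so that the response toward the interference direction is zero rather than maximal.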
- Sound source/speech separation also may be accomplished by independent component analysis. This technique is based on statistical independence, and works by maximizing the non-Gaussianity or mutual independence of the sound signals. While independent component analysis can achieve a high degree of separation, its many parameters make it more difficult to converge, and it can produce poor results; indeed, independent component analysis depends heavily on the initial conditions, because learning the coefficients takes time, during which the sources may have moved.
- various aspects of the subject matter described herein are directed towards a technology by which sound, such as speech from two or more speakers, is separated into separated signals by a multiple phase process/system that combines spatial filtering with regularization in a manner that provides significant improvements over other sound separation techniques.
- Audio signals received at a microphone array are transformed into frequency domain signals, such as via a modulated complex lapped transform, a Fourier transform, or any other suitable transformation to the frequency domain, as sketched below.
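As one illustration of this analysis/synthesis step, the sketch below uses a windowed FFT (STFT) as a stand-in for the MCLT; the frame size, hop, and window are assumptions for the example, and the input is assumed to be at least one frame long.

```python
import numpy as np

FRAME_LEN, HOP = 512, 256  # assumed sizes; 50% overlap

def analysis(x):
    """Windowed FFT standing in for the MCLT: time-domain samples in,
    complex frequency-domain frames out, shape (n_frames, n_bins)."""
    win = np.sqrt(np.hanning(FRAME_LEN))  # sqrt window for analysis + synthesis
    n_frames = 1 + (len(x) - FRAME_LEN) // HOP
    return np.stack([np.fft.rfft(win * x[i * HOP:i * HOP + FRAME_LEN])
                     for i in range(n_frames)])

def synthesis(frames):
    """Overlap-add inverse transform, the counterpart of the inverse
    MCLT (IMCLT) used to return to the time domain."""
    win = np.sqrt(np.hanning(FRAME_LEN))
    out = np.zeros(HOP * (len(frames) - 1) + FRAME_LEN)
    for i, spec in enumerate(frames):
        out[i * HOP:i * HOP + FRAME_LEN] += win * np.fft.irfft(spec, FRAME_LEN)
    return out
```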
- the frequency domain signals are processed into separated spatially filtered signals in the spatial filtering phase, including by inputting the signals into a plurality of beamformers (which may include nullformers).
- the outputs of the beamformers may be fed into nonlinear spatial filters to output the spatially filtered signals.
- the separated spatially filtered signals are input into an independent component analysis mechanism that is configured with multi-tap filters corresponding to previous input frames (instead of using only the current frame for instantaneous demixing).
- the separated outputs of the independent component analysis mechanism may be fed into secondary nonlinear spatial filters to output separated spatially filtered and regularized signals.
- Each of the separated spatially filtered and regularized signals is then inverse-transformed into a separated audio signal. (A skeleton of the two-phase flow is sketched below.)
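The following is a minimal skeleton of the two-phase flow for a single subband. All shapes and names are illustrative assumptions; adaptation of the ICA taps (the regularized update described later) is omitted.

```python
import numpy as np

def two_phase_subband(Y, bf_weights, spatial_gain, ica_taps):
    """Y:            (n_frames, n_mics) complex subband frames
    bf_weights:   (n_sources, n_mics) beamformer/nullformer weights
    spatial_gain: (n_frames, n_sources) real gains standing in for the
                  nonlinear (IDOA-based) spatial filter
    ica_taps:     (n_taps, n_sources, n_mics) multi-tap demixing filters
    """
    # Phase 1: spatial filtering -- beamform, then apply nonlinear gains.
    first_stage = spatial_gain * (Y @ bf_weights.conj().T)

    # Phase 2: feed-forward ICA over the current and previous frames;
    # first_stage is the regularization target when the taps are adapted.
    S = np.zeros_like(first_stage)
    for n in range(len(Y)):
        for i in range(min(len(ica_taps), n + 1)):
            S[n] += ica_taps[i] @ Y[n - i]
    return S  # secondary spatial filtering and the inverse transform follow
```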
- FIG. 1 is a block diagram representing components for sound separation in a subband domain.
- FIG. 2 is a flow diagram representing a two-phase sound separation system, including spatial filtering and regularized feed-forward independent component analysis.
- FIG. 3 is a representation of a matrix computed for a frequency beam that uses multi-tap filtering based on previous frames for speech separation.
- FIG. 4 shows an illustrative example of a computing environment into which various aspects of the present invention may be incorporated.
- Various aspects of the technology described herein are generally directed towards combining beamforming/nullforming/spatial filtering and/or an independent component analysis algorithm in a way that significantly improves sound/speech separation.
- One implementation uses a feed-forward network that includes independent component analysis in the subband domain to maximize the mutual independence of the separated current frames, using the information from current and previous multi-channel frames of microphone array signals, including after processing via beamforming/nullforming/spatial filtering.
- the technology described herein generally has the advantages of beamforming and independent component analysis without their disadvantages, including that the final results can be as robust as a beamformer while approaching the separation of independent component analysis. For example, by initializing independent component analysis with the beamformer values, initialization is not an issue. Further, the values of independent component analysis coefficients may be regularized to beamformer values, thereby making the system more robust to moving sources and shorter time windows for estimation.
- any of the examples herein are non-limiting.
- any audio separation including non-speech may use the technology described herein, as may other non-audio frequencies and/or technologies, e.g., sonar, radio frequencies and so forth.
- the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and audio processing in general.
- FIG. 1 shows a block diagram of regularized feed-forward independent component analysis (ICA) with instantaneous direction of arrival (IDOA) based post-processing.
- In FIG. 1, two independent speech sources 102 and 103 are separated in the subband domain.
- the time-domain signals captured using an array of multiple sensors (e.g., microphones) 104 are converted to the subband domain, in this example by using a modulated complex lapped transform (MCLT, blocks 106 ) that produces improved separation between frequency bands in an efficient manner.
- any other suitable transform may be used, e.g., FFT.
- the resulting signals may be converted back into the time domain using inverse MCLT (IMCLT), as represented by blocks 120 and 121 .
- beamformers may be time invariant, with weights computed offline, or adaptive, with weights computed as conditions change.
- One such adaptive beamformer is the minimum variance distortionless response (MVDR) beamformer, which in the frequency domain can be described as:
- W^H = D^H R_n^{−1} / (D^H R_n^{−1} D) (2)
- where D is a steering vector, R_n is a noise covariance matrix, and W is a weights matrix.
- Often the noise-only covariance R_n is replaced by R, the covariance matrix of the input (signal plus noise). This is generally more convenient because it avoids the need for a voice activity detector; such a beamformer is known as a minimum power distortionless response (MPDR) beamformer.
- To prevent instability due to direction-of-arrival mismatch, a regularization term is added to the sample covariance matrix. In one implementation, an additional null constraint is also added in the direction of the interference, giving the constrained beamformer of equation (3) (sketched below):
- W^H = [1 0] ([D_t | D_i]^H [R+λI]^{−1} [D_t | D_i])^{−1} [D_t | D_i]^H [R+λI]^{−1} (3)
- where D_t and D_i are steering vectors toward the target and interference directions, respectively, and λ is the regularization term for diagonal loading.
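A sketch of equation (3) in Python/NumPy follows, assuming the steering vectors and the subband input covariance are given (for a linear array, steering vectors can be built from the plane-wave delay model sketched earlier).

```python
import numpy as np

def constrained_mpdr(D_t, D_i, R, lam):
    """Equation (3): unit response toward the target steering vector D_t,
    a forced null toward the interference steering vector D_i, with
    diagonal loading lam for robustness to DOA mismatch."""
    C = np.column_stack([D_t, D_i])                  # [D_t | D_i]
    R_inv = np.linalg.inv(R + lam * np.eye(len(R)))  # [R + lambda*I]^{-1}
    g = np.array([1.0, 0.0])                         # responses: beam = 1, null = 0
    wH = g @ np.linalg.inv(C.conj().T @ R_inv @ C) @ C.conj().T @ R_inv
    return wH                                        # row vector w^H
```

By construction, `wH @ D_t` equals 1 and `wH @ D_i` equals 0, which is exactly the beam-plus-enforced-null behavior described above.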
- FIG. 2 shows an example block diagram of a two phase mechanism for one subband.
- the first phase comprises spatial filtering, which separates the sound sources by their positions.
- Signals from the microphone array 204 are transformed by a suitable transform 206 (MCLT is shown as an example).
- a linear adaptive beamformer (MVDR or MPDR), combined with enforced nullformers, is used for signal separation, as represented by blocks 208 and 209.
- the beamformer outputs are passed to nonlinear spatial filtering, as represented by blocks 210 and 211.
- the nonlinear spatial filters comprise instantaneous direction of arrival (IDOA) based spatial filters, such as described in published U.S. Pat. Appl. No. 20080288219.
- the output of the spatial filtering above is used for regularization by the second phase of the exemplified two-stage processing scheme.
- the second phase comprises a feed-forward ICA 214 , which is a modification of a known ICA algorithm, with the modification based upon using multi-tap filters.
- the duration of the reverberation process is typically longer than a current frame; using multi-tap filters that contain historical information over previous frames thus allows the ICA to account for the full duration of the reverberation.
- ten multi-tap filters corresponding to ten previous 30 ms frames may be used for a 300 ms reverberation duration, whereby equation (1) corresponds to the matrix generally represented in FIG. 3, where n represents the current frame. This is only one example; shorter frames with correspondingly more taps have also been implemented.
- the mutual independence of the separated speech signals is maximized by using both current and previous multi-channel frames (multiple taps), as in the matrix-form sketch below.
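The stacked matrix form of FIG. 3 can be sketched as follows, under assumed shapes (each tap W_i of size n_out × n_ch, subband frames Y of size n_frames × n_ch, and a current frame index n of at least n_taps − 1):

```python
import numpy as np

def stacked_demix(W_taps, Y, n):
    """Multi-tap demixing in matrix form: stack the current and previous
    frames into one tall vector and multiply by the horizontally stacked
    taps [W_0 | W_1 | ... | W_{N-1}], mirroring FIG. 3."""
    n_taps = len(W_taps)
    y_stack = np.concatenate([Y[n - i] for i in range(n_taps)])  # [Y(n); ...; Y(n-N+1)]
    W_stack = np.hstack(list(W_taps))                            # [W_0 | ... | W_{N-1}]
    return W_stack @ y_stack                                     # separated frame S(n)
```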
- secondary spatial filters 215 and 216 are applied on the ICA outputs, followed by the inverse MCLTs 220 and 221 to provide the separated speech signals. In general, this removes any residual interference.
- the output of the second phase comprises separated signals at a second level of separation that is typically a significant improvement over prior techniques, e.g., as measured by signal-to-interference ratios.
- the sound source localizer provides the directions to the desired (θ1) and interference (θ2) signals. Given proper estimates of the DOAs for the target and interference speech signals, the constrained beamformer plus nullformer is applied as described in equation (3).
- the consequent spatial filter applies a time-varying real gain for each subband, acting as a spatio-temporal filter for suppressing the sounds coming from non-look directions.
- the suppression gain is computed as:
- G_k(n) = ∫_{θ1−Δθ}^{θ1+Δθ} p_k(θ) dθ / ∫_{−π}^{+π} p_k(θ) dθ, (4) where Δθ is the range around the desired direction θ1 from which to capture the sound.
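A sketch of equation (4), assuming the IDOA-derived spatial probability p_k(θ) for subband k is available on a uniform DOA grid (the grid and names are assumptions; wrap-around at ±π is ignored for brevity):

```python
import numpy as np

def suppression_gain(p_k, thetas, theta_1, delta_theta):
    """Equation (4): the fraction of the spatial probability mass within
    +/- delta_theta of the look direction theta_1, used as a real,
    time-varying gain for one subband and one frame."""
    in_beam = np.abs(thetas - theta_1) <= delta_theta
    # On a uniform grid the bin width cancels between the two integrals.
    return p_k[in_beam].sum() / max(p_k.sum(), 1e-12)
```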
- equation (5) is performed iteratively for each frequency beam.
- the iteration may be done on the order of dozens to a thousand times, depending on available resources. In practice, reasonable results have been obtained with significantly fewer than a thousand iterations.
- the first tap of RFFICA for the reference channels is initialized, in one implementation, as a pseudo-inversion of the steering vector stack, so that a one (unit gain) can be assigned to the target direction and a null to the interference direction: W_0,ini|ref = ([e(θt) | e(θi)]^H [e(θt) | e(θi)])^{−1} [e(θt) | e(θi)]^H (equation (11) below).
- α is set to 0.5 just to penalize larger deviations from the first stage output.
- as the nonlinear function, g(X) = tanh(|X|)·exp(j∠X) is used, where ∠X represents the phase of the complex value X (equation (12) below).
- the technology described herein thus overcomes limitations of the subband domain ICA in a reverberant acoustic environment, and also increases the super-Gaussianity of the separated speech signals.
- the feed-forward demixing filter structure with several taps in the subband domain is accommodated with natural gradient update rules.
- the estimated spatial information on the target and interference may be used in combination with a regularization term added on the update equation, thus minimizing mean squared error between separated output signals and the outputs of spatial filters.
- FIG. 4 illustrates an example of a suitable computing and networking environment 400 on which the examples of FIGS. 1-3 may be implemented.
- the computing system environment 400 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 400 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 400 .
- the invention is operational with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
- program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types.
- the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
- program modules may be located in local and/or remote computer storage media including memory storage devices.
- an exemplary system for implementing various aspects of the invention may include a general purpose computing device in the form of a computer 410 .
- Components of the computer 410 may include, but are not limited to, a processing unit 420 , a system memory 430 , and a system bus 421 that couples various system components including the system memory to the processing unit 420 .
- the system bus 421 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
- such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus, also known as Mezzanine bus.
- the computer 410 typically includes a variety of computer-readable media.
- Computer-readable media can be any available media that can be accessed by the computer 410 and includes both volatile and nonvolatile media, and removable and non-removable media.
- Computer-readable media may comprise computer storage media and communication media.
- Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 410.
- Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
- modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above may also be included within the scope of computer-readable media.
- the system memory 430 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 431 and random access memory (RAM) 432 .
- A basic input/output system 433 (BIOS), containing the basic routines that help to transfer information between elements within the computer 410, such as during start-up, is typically stored in ROM 431.
- RAM 432 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 420 .
- FIG. 4 illustrates operating system 434 , application programs 435 , other program modules 436 and program data 437 .
- the computer 410 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
- FIG. 4 illustrates a hard disk drive 441 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 451 that reads from or writes to a removable, nonvolatile magnetic disk 452 , and an optical disk drive 455 that reads from or writes to a removable, nonvolatile optical disk 456 such as a CD ROM or other optical media.
- removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
- the hard disk drive 441 is typically connected to the system bus 421 through a non-removable memory interface such as interface 440
- magnetic disk drive 451 and optical disk drive 455 are typically connected to the system bus 421 by a removable memory interface, such as interface 450 .
- the drives and their associated computer storage media provide storage of computer-readable instructions, data structures, program modules and other data for the computer 410 .
- hard disk drive 441 is illustrated as storing operating system 444 , application programs 445 , other program modules 446 and program data 447 .
- operating system 444, application programs 445, other program modules 446 and program data 447 are given different numbers herein to illustrate that, at a minimum, they are different copies.
- a user may enter commands and information into the computer 410 through input devices such as a tablet or electronic digitizer 464, a microphone 463, a keyboard 462 and a pointing device 461, commonly referred to as a mouse, trackball or touch pad.
- Other input devices not shown in FIG. 4 may include a joystick, game pad, satellite dish, scanner, or the like.
- These and other input devices are often connected to the processing unit 420 through a user input interface 460 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
- a monitor 491 or other type of display device is also connected to the system bus 421 via an interface, such as a video interface 490 .
- the monitor 491 may also be integrated with a touch-screen panel or the like. Note that the monitor and/or touch screen panel can be physically coupled to a housing in which the computing device 410 is incorporated, such as in a tablet-type personal computer. In addition, computers such as the computing device 410 may also include other peripheral output devices such as speakers 495 and printer 496 , which may be connected through an output peripheral interface 494 or the like.
- the computer 410 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 480 .
- the remote computer 480 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 410 , although only a memory storage device 481 has been illustrated in FIG. 4 .
- the logical connections depicted in FIG. 4 include one or more local area networks (LAN) 471 and one or more wide area networks (WAN) 473 , but may also include other networks.
- Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
- When used in a LAN networking environment, the computer 410 is connected to the LAN 471 through a network interface or adapter 470.
- When used in a WAN networking environment, the computer 410 typically includes a modem 472 or other means for establishing communications over the WAN 473, such as the Internet.
- the modem 472, which may be internal or external, may be connected to the system bus 421 via the user input interface 460 or other appropriate mechanism.
- a wireless networking component such as comprising an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN.
- program modules depicted relative to the computer 410 may be stored in the remote memory storage device.
- FIG. 4 illustrates remote application programs 485 as residing on memory device 481 . It may be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
- An auxiliary subsystem 499 (e.g., for auxiliary display of content) may be connected via the user interface 460 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state.
- the auxiliary subsystem 499 may be connected to the modem 472 and/or network interface 470 to allow communication between these systems while the main processing unit 420 is in a low power state.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Circuit For Audible Band Transducer (AREA)
Description
S=WY (1)
where S is the separated speech vector, W is the demixing matrix, and Y is the measured speech vector in a reverberant and noisy environment.
W^H = D^H R_n^{−1} / (D^H R_n^{−1} D) (2)
where D is a steering vector, R_n is a noise covariance matrix, and W is a weights matrix. Often the noise-only covariance R_n is replaced by R, which is the covariance matrix of the input (signal plus noise). This is generally more convenient as it avoids using a voice activity detector; such a beamformer is known as minimum power distortionless response (MPDR). To prevent instability due to direction-of-arrival mismatch, a regularization term is added to the sample covariance matrix. In one implementation, an additional null constraint is also added in the direction of the interference. The beamformer with the extra nullforming constraint may be formulated as:
W^H = [1 0] ([D_t | D_i]^H [R+λI]^{−1} [D_t | D_i])^{−1} [D_t | D_i]^H [R+λI]^{−1} (3)
where D_t and D_i are steering vectors toward the target and interference directions, respectively, and λ is the regularization term for diagonal loading. With the beam on the target and the null on the interference directions, the first tap of the feed-forward ICA filter may be initialized for appropriate channel assignment.
G_k(n) = ∫_{θ1−Δθ}^{θ1+Δθ} p_k(θ) dθ / ∫_{−π}^{+π} p_k(θ) dθ (4)
where Δθ is the range around the desired direction θ1 from which to capture the sound.
W_i = W_i + μ((1−α)·Δ_ICA,i − α·Δ_First stage,i) (5)
where i = 0, 1, . . . , N−1, and N is the number of taps. Δ_ICA,i and Δ_First stage,i represent the portion of the ICA update and the regularized portion on the first stage output, respectively.
where ⟨·⟩_t represents time averaging, (·)^{−i} represents an i-sample delay, S_First stage is the first stage output vector for regularization, and |Ref denotes the reference channels. A penalty term is applied only to the channels where the references are assigned; the other entries of the mixing matrix are set to zero so that the penalty term vanishes on those channel updates.
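A sketch of the blended update of equation (5) follows; the step size μ is an assumed value, α = 0.5 as in the text, and the two per-tap gradient terms are taken as given.

```python
import numpy as np

def regularized_tap_update(W, d_ica, d_first, mu=0.01, alpha=0.5):
    """Equation (5): blend the natural-gradient ICA update with the
    regularization term that pulls the separated outputs toward the
    first-stage spatial-filter outputs. Each argument is a per-tap list
    of (n_out, n_ch) arrays."""
    return [W[i] + mu * ((1.0 - alpha) * d_ica[i] - alpha * d_first[i])
            for i in range(len(W))]
```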
W_i,ini = exp(−βi)·I (10)
where I is an identity matrix, β is selected to model the average reverberation time, and i is the tap index. Note that, in one implementation, the first tap of RFFICA for the reference channels is initialized as a pseudo-inversion of the steering vector stack, so that a one (unit gain) can be assigned to the target direction and a null to the interference direction:
W_0,ini|ref = ([e(θ_t) | e(θ_i)]^H [e(θ_t) | e(θ_i)])^{−1} [e(θ_t) | e(θ_i)]^H (11)
Because the initialized filter is updated using ICA, a slight mismatch with the actual DOA may be adjusted in the updating procedure. In one implementation, α is set to 0.5 just to penalize larger deviations from the first stage output. As the nonlinear function g(·), a polar-coordinate based hyperbolic tangent function is used, which suits super-Gaussian sources and has a good convergence property:
g(X) = tanh(|X|)·exp(j∠X) (12)
where ∠X represents the phase of the complex value X. To deal with the permutation and scaling, the steered response of the converged first-tap demixing filter is used:
where l is the designated channel number, F_l is the steered response for the channel output, and F is the steered response to the candidate DOAs. To penalize the non-look direction in the scaling process, nonlinear attenuation is added with the normalization using the steered response. In one implementation, γ is set to one (1). The spatial filter also penalizes the non-look directional sources in each frequency bin.
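Sketches of equations (10)-(12) follow, with NumPy standing in for the implementation (function names are illustrative assumptions).

```python
import numpy as np

def g(X):
    """Equation (12): polar-coordinate tanh nonlinearity; compresses the
    magnitude while preserving the phase of each complex value."""
    return np.tanh(np.abs(X)) * np.exp(1j * np.angle(X))

def init_taps(n_taps, n_ch, beta):
    """Equation (10): exponentially decaying identity initialization,
    with beta modeling the average reverberation time."""
    return np.stack([np.exp(-beta * i) * np.eye(n_ch) for i in range(n_taps)])

def init_first_tap_ref(e_t, e_i):
    """Equation (11): pseudo-inversion of the stacked steering vectors,
    giving the reference channels unit gain toward the target DOA and
    a null toward the interference DOA."""
    E = np.column_stack([e_t, e_i])   # [e(theta_t) | e(theta_i)]
    return np.linalg.pinv(E)          # equals (E^H E)^{-1} E^H for full column rank
```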
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/815,408 US8583428B2 (en) | 2010-06-15 | 2010-06-15 | Sound source separation using spatial filtering and regularization phases |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/815,408 US8583428B2 (en) | 2010-06-15 | 2010-06-15 | Sound source separation using spatial filtering and regularization phases |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110307251A1 US20110307251A1 (en) | 2011-12-15 |
US8583428B2 true US8583428B2 (en) | 2013-11-12 |
Family
ID=45096929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/815,408 Active 2031-10-19 US8583428B2 (en) | 2010-06-15 | 2010-06-15 | Sound source separation using spatial filtering and regularization phases |
Country Status (1)
Country | Link |
---|---|
US (1) | US8583428B2 (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130253923A1 (en) * | 2012-03-21 | 2013-09-26 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry | Multichannel enhancement system for preserving spatial cues |
US9204214B2 (en) | 2007-04-13 | 2015-12-01 | Personics Holdings, Llc | Method and device for voice operated control |
US9270244B2 (en) | 2013-03-13 | 2016-02-23 | Personics Holdings, Llc | System and method to detect close voice sources and automatically enhance situation awareness |
US9271077B2 (en) | 2013-12-17 | 2016-02-23 | Personics Holdings, Llc | Method and system for directional enhancement of sound using small microphone arrays |
US20160111113A1 (en) * | 2013-06-03 | 2016-04-21 | Samsung Electronics Co., Ltd. | Speech enhancement method and apparatus for same |
US20160277862A1 (en) * | 2015-03-20 | 2016-09-22 | Northwestern Polytechnical University | Multistage minimum variance distortionless response beamformer |
US9626970B2 (en) | 2014-12-19 | 2017-04-18 | Dolby Laboratories Licensing Corporation | Speaker identification using spatial information |
RU170249U1 (en) * | 2016-09-02 | 2017-04-18 | Общество с ограниченной ответственностью ЛЕКСИ (ООО ЛЕКСИ) | DEVICE FOR TEMPERATURE-INVARIANT AUDIO-VISUAL VOICE SOURCE LOCALIZATION |
US9706280B2 (en) | 2007-04-13 | 2017-07-11 | Personics Holdings, Llc | Method and device for voice operated control |
US10405082B2 (en) | 2017-10-23 | 2019-09-03 | Staton Techiya, Llc | Automatic keyword pass-through system |
US10438588B2 (en) * | 2017-09-12 | 2019-10-08 | Intel Corporation | Simultaneous multi-user audio signal recognition and processing for far field audio |
US10553196B1 (en) | 2018-11-06 | 2020-02-04 | Michael A. Stewart | Directional noise-cancelling and sound detection system and method for sound targeted hearing and imaging |
US10667069B2 (en) | 2016-08-31 | 2020-05-26 | Dolby Laboratories Licensing Corporation | Source separation for reverberant environment |
US11217237B2 (en) | 2008-04-14 | 2022-01-04 | Staton Techiya, Llc | Method and device for voice operated control |
US11317202B2 (en) | 2007-04-13 | 2022-04-26 | Staton Techiya, Llc | Method and device for voice operated control |
US11349206B1 (en) | 2021-07-28 | 2022-05-31 | King Abdulaziz University | Robust linearly constrained minimum power (LCMP) beamformer with limited snapshots |
US11610587B2 (en) | 2008-09-22 | 2023-03-21 | Staton Techiya Llc | Personalized sound management and method |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101251045B1 (en) * | 2009-07-28 | 2013-04-04 | 한국전자통신연구원 | Apparatus and method for audio signal discrimination |
US9100734B2 (en) * | 2010-10-22 | 2015-08-04 | Qualcomm Incorporated | Systems, methods, apparatus, and computer-readable media for far-field multi-source tracking and separation |
TWI456516B (en) * | 2010-12-17 | 2014-10-11 | Univ Nat Chiao Tung | Independent component analysis processor |
US9360546B2 (en) | 2012-04-13 | 2016-06-07 | Qualcomm Incorporated | Systems, methods, and apparatus for indicating direction of arrival |
US9881616B2 (en) * | 2012-06-06 | 2018-01-30 | Qualcomm Incorporated | Method and systems having improved speech recognition |
US9538285B2 (en) * | 2012-06-22 | 2017-01-03 | Verisilicon Holdings Co., Ltd. | Real-time microphone array with robust beamformer and postfilter for speech enhancement and method of operation thereof |
US9443532B2 (en) * | 2012-07-23 | 2016-09-13 | Qsound Labs, Inc. | Noise reduction using direction-of-arrival information |
US9460732B2 (en) * | 2013-02-13 | 2016-10-04 | Analog Devices, Inc. | Signal source separation |
US9596437B2 (en) | 2013-08-21 | 2017-03-14 | Microsoft Technology Licensing, Llc | Audio focusing via multiple microphones |
US9420368B2 (en) | 2013-09-24 | 2016-08-16 | Analog Devices, Inc. | Time-frequency directional processing of audio signals |
JP2015155975A (en) * | 2014-02-20 | 2015-08-27 | ソニー株式会社 | Sound signal processor, sound signal processing method, and program |
JP6485711B2 (en) * | 2014-04-16 | 2019-03-20 | ソニー株式会社 | Sound field reproduction apparatus and method, and program |
JP6118838B2 (en) * | 2014-08-21 | 2017-04-19 | 本田技研工業株式会社 | Information processing apparatus, information processing system, information processing method, and information processing program |
KR102470962B1 (en) * | 2014-09-05 | 2022-11-24 | 인터디지털 매디슨 페턴트 홀딩스 에스에이에스 | Method and apparatus for enhancing sound sources |
US9525934B2 (en) * | 2014-12-31 | 2016-12-20 | Stmicroelectronics Asia Pacific Pte Ltd. | Steering vector estimation for minimum variance distortionless response (MVDR) beamforming circuits, systems, and methods |
US10535361B2 (en) * | 2017-10-19 | 2020-01-14 | Kardome Technology Ltd. | Speech enhancement using clustering of cues |
RU2680735C1 (en) * | 2018-10-15 | 2019-02-26 | Акционерное общество "Концерн "Созвездие" | Method of separation of speech and pauses by analysis of the values of phases of frequency components of noise and signal |
US11258560B2 (en) * | 2018-11-14 | 2022-02-22 | Mediatek Inc. | Physical downlink control channel (PDCCH) transmission and reception with multiple transmission points |
RU2700189C1 (en) * | 2019-01-16 | 2019-09-13 | Акционерное общество "Концерн "Созвездие" | Method of separating speech and speech-like noise by analyzing values of energy and phases of frequency components of signal and noise |
EP3939035A4 (en) | 2019-03-10 | 2022-11-02 | Kardome Technology Ltd. | Speech enhancement using clustering of cues |
CN111207897B (en) * | 2020-02-23 | 2021-12-17 | 西安理工大学 | Local nonlinear factor positioning detection method based on nonlinear separation |
CN112285641B (en) * | 2020-09-16 | 2023-12-29 | 西安空间无线电技术研究所 | ICA-based DOA (direction of arrival) estimation method and device |
CN113506582B (en) * | 2021-05-25 | 2024-07-09 | 北京小米移动软件有限公司 | Voice signal identification method, device and system |
CN118314870B (en) * | 2024-06-11 | 2024-08-20 | 山东鑫林纸制品有限公司 | Noise control system in paper product production process |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5999567A (en) * | 1996-10-31 | 1999-12-07 | Motorola, Inc. | Method for recovering a source signal from a composite signal and apparatus therefor |
US20010037195A1 (en) * | 2000-04-26 | 2001-11-01 | Alejandro Acero | Sound source separation using convolutional mixing and a priori sound source knowledge |
US6424960B1 (en) * | 1999-10-14 | 2002-07-23 | The Salk Institute For Biological Studies | Unsupervised adaptation and classification of multiple classes and sources in blind signal separation |
US6563803B1 (en) * | 1997-11-26 | 2003-05-13 | Qualcomm Incorporated | Acoustic echo canceller |
US20030179888A1 (en) * | 2002-03-05 | 2003-09-25 | Burnett Gregory C. | Voice activity detection (VAD) devices and methods for use with noise suppression systems |
US20050018836A1 (en) * | 2003-07-23 | 2005-01-27 | Mitel Networks Corporation | Method to reduce acoustic coupling in audio conferencing systems |
US7099821B2 (en) * | 2003-09-12 | 2006-08-29 | Softmax, Inc. | Separation of target acoustic signals in a multi-transducer arrangement |
US20070021958A1 (en) * | 2005-07-22 | 2007-01-25 | Erik Visser | Robust separation of speech signals in a noisy environment |
US20080027714A1 (en) | 2006-07-28 | 2008-01-31 | Kabushiki Kaisha Kobe Seiko Sho | Sound source separation apparatus and sound source separation method |
US20080306739A1 (en) | 2007-06-08 | 2008-12-11 | Honda Motor Co., Ltd. | Sound source separation system |
US7970564B2 (en) * | 2006-05-02 | 2011-06-28 | Qualcomm Incorporated | Enhancement techniques for blind source separation (BSS) |
US8005237B2 (en) * | 2007-05-17 | 2011-08-23 | Microsoft Corp. | Sensor array beamformer post-processor |
US20120072210A1 (en) * | 2009-03-25 | 2012-03-22 | Kabushiki Kaisha Toshiba | Signal processing method, apparatus and program |
US8175871B2 (en) * | 2007-09-28 | 2012-05-08 | Qualcomm Incorporated | Apparatus and method of noise and echo reduction in multiple microphone audio systems |
US20120120218A1 (en) * | 2010-11-15 | 2012-05-17 | Flaks Jason S | Semi-private communication in open environments |
US8223988B2 (en) * | 2008-01-29 | 2012-07-17 | Qualcomm Incorporated | Enhanced blind source separation algorithm for highly correlated mixtures |
US8447595B2 (en) * | 2010-06-03 | 2013-05-21 | Apple Inc. | Echo-related decisions on automatic gain control of uplink speech signal in a communications device |
- 2010-06-15: US application US12/815,408 filed; granted as US8583428B2 (active)
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5999567A (en) * | 1996-10-31 | 1999-12-07 | Motorola, Inc. | Method for recovering a source signal from a composite signal and apparatus therefor |
US6563803B1 (en) * | 1997-11-26 | 2003-05-13 | Qualcomm Incorporated | Acoustic echo canceller |
US6424960B1 (en) * | 1999-10-14 | 2002-07-23 | The Salk Institute For Biological Studies | Unsupervised adaptation and classification of multiple classes and sources in blind signal separation |
US20010037195A1 (en) * | 2000-04-26 | 2001-11-01 | Alejandro Acero | Sound source separation using convolutional mixing and a priori sound source knowledge |
US7047189B2 (en) | 2000-04-26 | 2006-05-16 | Microsoft Corporation | Sound source separation using convolutional mixing and a priori sound source knowledge |
US20030179888A1 (en) * | 2002-03-05 | 2003-09-25 | Burnett Gregory C. | Voice activity detection (VAD) devices and methods for use with noise suppression systems |
US20050018836A1 (en) * | 2003-07-23 | 2005-01-27 | Mitel Networks Corporation | Method to reduce acoustic coupling in audio conferencing systems |
US7099821B2 (en) * | 2003-09-12 | 2006-08-29 | Softmax, Inc. | Separation of target acoustic signals in a multi-transducer arrangement |
US20070021958A1 (en) * | 2005-07-22 | 2007-01-25 | Erik Visser | Robust separation of speech signals in a noisy environment |
US7970564B2 (en) * | 2006-05-02 | 2011-06-28 | Qualcomm Incorporated | Enhancement techniques for blind source separation (BSS) |
US20080027714A1 (en) | 2006-07-28 | 2008-01-31 | Kabushiki Kaisha Kobe Seiko Sho | Sound source separation apparatus and sound source separation method |
US8005237B2 (en) * | 2007-05-17 | 2011-08-23 | Microsoft Corp. | Sensor array beamformer post-processor |
US20080306739A1 (en) | 2007-06-08 | 2008-12-11 | Honda Motor Co., Ltd. | Sound source separation system |
US8175871B2 (en) * | 2007-09-28 | 2012-05-08 | Qualcomm Incorporated | Apparatus and method of noise and echo reduction in multiple microphone audio systems |
US8223988B2 (en) * | 2008-01-29 | 2012-07-17 | Qualcomm Incorporated | Enhanced blind source separation algorithm for highly correlated mixtures |
US20120072210A1 (en) * | 2009-03-25 | 2012-03-22 | Kabushiki Kaisha Toshiba | Signal processing method, apparatus and program |
US8447595B2 (en) * | 2010-06-03 | 2013-05-21 | Apple Inc. | Echo-related decisions on automatic gain control of uplink speech signal in a communications device |
US20120120218A1 (en) * | 2010-11-15 | 2012-05-17 | Flaks Jason S | Semi-private communication in open environments |
Non-Patent Citations (25)
Title |
---|
Araki, et al., "The Fundamental Limitation of Frequency Domain Blind Source Separation for Convolutive Mixtures of Speech", Retrieved at <<http://ieeexplore.ieee.org//stamp/stamp.jsp?tp=&arnumber=01193577>>, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 2, Mar. 2003, pp. 109-116. |
Asano, F. ;Ikeda, S. ; Ogawa, M. ; Asoh, H. ; Kitawaki, N. , Combined approach of array processing and independent component analysis for blind separation of acoustic signals , May 2003, IEEE Transactions on Speech and Audio Processing, vol. 11;Issue: 3, pp. 204-215. * |
Beerends, et al., "Perceptual evaluation of speech quality (PESQ), an objective method for end-to-end speech quality assessment of narrowband telephone network and speech codecs", Retrieved at <<http://www.mp3-tech.org/programmer/docs/2001-P03b.pdf>>, Journal of the Audio Engineering Society, Oct. 2002, pp. 1-27. |
Dhir C.S.;Park H.; Lee S., Directionally Constrained Filterbank ICA, Aug. 2007, Signal Processing Letters IEEE, vol. 14; Issue 8, pp. 541-544. * |
Drake, et al., "Sound Source Separation via Computational Auditory Scene Analysis Enhanced beam forming", Retrieved at <<http://ivpl.eecs.northwestern.edu/system/files/01191040.pdf>>, In Proceedings of the 2nd IEEE Sensor Array and Multichannel Signal Processing Workshop, Aug. 2002, pp. 259-263. |
Günel, et al., "Blind Source Separation and Directional Audio Synthesis for Binaural Auralization of Multiple Sound Sources using Microphone Array Recordings", Retrieved at <<http://scitation.aip.org/getpdf/servlet/GetPDFServlet?filetype=pdf&id=PMARCW000004000001060001000001&idtype=cvips&prog=normal>>, Proceedings of Meetings on Acoustics, vol. 4, Aug. 6, 2008, pp. 1-7. |
Kolossa D.; Orglmeister R., Nonlinear Postprocessing for Blind Speech Separation, 2004, Proc. ICA'2004, pp. 832-839 2004. * |
Malvar, "A Modulation Complex Lapped Transform and Its Applications to Audio Processing", 1999. * |
Malvar, Henrique., "A modulated complex lapped transform and its application to audio processing", Retrieved at <<http://research.microsoft.com/pubs/69702/tr-99-27.pdf>>, Technical Report, MSR-TR-99-27, May 1999, pp. 1-9. |
Saruwatari, et al., "Blind Source Separation Based on a Fast-Convergence Algo-rithm Combining ICA and Beamforming", Retrieved at <<http://www.iesk.ovgu.de/iniesk-media/bilder/ks/publications/pickings/eurospeech-2001-aalborg/page2603.pdf>>, Eurospeech, 2001, pp. 4. |
Sawada, et al., "Polar Coordinate Based Nonlinear Function for Frequency-Domain Blind Source Separation", Retrieved at <<http://www.tara.tsukuba.ac.jp/˜maki/reprint/Sawada/hs02icassp1001-1004.pdf >>, In Proceedings of ICASSP, 2002, pp. 1001-1004. |
Seltzer, et al., "Microphone Array Post-Filter using Incremental Bayes Learning to Track the Spatial Distributions of Speech and Noise", Retrieved at <<http://thamakau.usc.edu/Proceedings/ICASSP%202007/pdfs/0100029.pdf >>, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP, Apr. 15-20, 2007, pp. 29-32. |
Valin, et al., "Robust 3d Localization and Tracking of Sound Sources using Beam forming and Particle Filtering", Retrieved at <<http://people.xiph.org/˜jm/papers/valin—icassp2006.pdf>>, In Proceedings International Conference on Audio, Speech and Signal Processing, 2006, pp. 4. |
Valin, et al., "Robust 3d Localization and Tracking of Sound Sources using Beam forming and Particle Filtering", Retrieved at >, In Proceedings International Conference on Audio, Speech and Signal Processing, 2006, pp. 4. |
Valin, et al., "Robust Recognition of Simultaneous Speech by a Mobile Robot", Retrieved at <<http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4285864&isnumber=4285839>>, IEEE Transactions on Robotics, vol. 23, No. 4, Aug. 2007, pp. 742-752 |
Valin, et al., "Robust Recognition of Simultaneous Speech by a Mobile Robot", Retrieved at >, IEEE Transactions on Robotics, vol. 23, No. 4, Aug. 2007, pp. 742-752 |
Virtanen, Tuomas., "Sound Source Separation in Monaural Music Signals", Retrieved at <<http://www.cs.tut.fi/sgn/arg/music/tuomasv/virtanen—phd.pdf>>, Ph.D. dissertation, 2006, pp. 134. |
Wang C.; Brandstein M.S., Multi-source face tracking with audio and visual data, 1999, 1999 IEEE 3rd Workshop, pp. 169-174. * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10382853B2 (en) | 2007-04-13 | 2019-08-13 | Staton Techiya, Llc | Method and device for voice operated control |
US9706280B2 (en) | 2007-04-13 | 2017-07-11 | Personics Holdings, Llc | Method and device for voice operated control |
US10631087B2 (en) | 2007-04-13 | 2020-04-21 | Staton Techiya, Llc | Method and device for voice operated control |
US10129624B2 (en) | 2007-04-13 | 2018-11-13 | Staton Techiya, Llc | Method and device for voice operated control |
US11317202B2 (en) | 2007-04-13 | 2022-04-26 | Staton Techiya, Llc | Method and device for voice operated control |
US12249326B2 (en) | 2007-04-13 | 2025-03-11 | St Case1Tech, Llc | Method and device for voice operated control |
US10051365B2 (en) | 2007-04-13 | 2018-08-14 | Staton Techiya, Llc | Method and device for voice operated control |
US9204214B2 (en) | 2007-04-13 | 2015-12-01 | Personics Holdings, Llc | Method and device for voice operated control |
US11217237B2 (en) | 2008-04-14 | 2022-01-04 | Staton Techiya, Llc | Method and device for voice operated control |
US11610587B2 (en) | 2008-09-22 | 2023-03-21 | Staton Techiya Llc | Personalized sound management and method |
US20130253923A1 (en) * | 2012-03-21 | 2013-09-26 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry | Multichannel enhancement system for preserving spatial cues |
US9270244B2 (en) | 2013-03-13 | 2016-02-23 | Personics Holdings, Llc | System and method to detect close voice sources and automatically enhance situation awareness |
US20160111113A1 (en) * | 2013-06-03 | 2016-04-21 | Samsung Electronics Co., Ltd. | Speech enhancement method and apparatus for same |
US10431241B2 (en) * | 2013-06-03 | 2019-10-01 | Samsung Electronics Co., Ltd. | Speech enhancement method and apparatus for same |
US11043231B2 (en) | 2013-06-03 | 2021-06-22 | Samsung Electronics Co., Ltd. | Speech enhancement method and apparatus for same |
US10529360B2 (en) | 2013-06-03 | 2020-01-07 | Samsung Electronics Co., Ltd. | Speech enhancement method and apparatus for same |
US9271077B2 (en) | 2013-12-17 | 2016-02-23 | Personics Holdings, Llc | Method and system for directional enhancement of sound using small microphone arrays |
US9626970B2 (en) | 2014-12-19 | 2017-04-18 | Dolby Laboratories Licensing Corporation | Speaker identification using spatial information |
US9560463B2 (en) * | 2015-03-20 | 2017-01-31 | Northwestern Polytechnical University | Multistage minimum variance distortionless response beamformer |
US20160277862A1 (en) * | 2015-03-20 | 2016-09-22 | Northwestern Polytechnical University | Multistage minimum variance distortionless response beamformer |
US10667069B2 (en) | 2016-08-31 | 2020-05-26 | Dolby Laboratories Licensing Corporation | Source separation for reverberant environment |
US10904688B2 (en) | 2016-08-31 | 2021-01-26 | Dolby Laboratories Licensing Corporation | Source separation for reverberant environment |
RU170249U1 (en) * | 2016-09-02 | 2017-04-18 | Общество с ограниченной ответственностью ЛЕКСИ (ООО ЛЕКСИ) | DEVICE FOR TEMPERATURE-INVARIANT AUDIO-VISUAL VOICE SOURCE LOCALIZATION |
US10438588B2 (en) * | 2017-09-12 | 2019-10-08 | Intel Corporation | Simultaneous multi-user audio signal recognition and processing for far field audio |
US10966015B2 (en) | 2017-10-23 | 2021-03-30 | Staton Techiya, Llc | Automatic keyword pass-through system |
US10405082B2 (en) | 2017-10-23 | 2019-09-03 | Staton Techiya, Llc | Automatic keyword pass-through system |
US11432065B2 (en) | 2017-10-23 | 2022-08-30 | Staton Techiya, Llc | Automatic keyword pass-through system |
US10553196B1 (en) | 2018-11-06 | 2020-02-04 | Michael A. Stewart | Directional noise-cancelling and sound detection system and method for sound targeted hearing and imaging |
US11349206B1 (en) | 2021-07-28 | 2022-05-31 | King Abdulaziz University | Robust linearly constrained minimum power (LCMP) beamformer with limited snapshots |
Also Published As
Publication number | Publication date |
---|---|
US20110307251A1 (en) | 2011-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8583428B2 (en) | Sound source separation using spatial filtering and regularization phases | |
US10123113B2 (en) | Selective audio source enhancement | |
US10490204B2 (en) | Method and system of acoustic dereverberation factoring the actual non-ideal acoustic environment | |
US10038795B2 (en) | Robust acoustic echo cancellation for loosely paired devices based on semi-blind multichannel demixing | |
Gannot et al. | A consolidated perspective on multimicrophone speech enhancement and source separation | |
US8363850B2 (en) | Audio signal processing method and apparatus for the same | |
US8401206B2 (en) | Adaptive beamformer using a log domain optimization criterion | |
US8223988B2 (en) | Enhanced blind source separation algorithm for highly correlated mixtures | |
US9008329B1 (en) | Noise reduction using multi-feature cluster tracker | |
US9570087B2 (en) | Single channel suppression of interfering sources | |
US7626889B2 (en) | Sensor array post-filter for tracking spatial distributions of signals and noise | |
US9485574B2 (en) | Spatial interference suppression using dual-microphone arrays | |
US8880396B1 (en) | Spectrum reconstruction for automatic speech recognition | |
CN109473118B (en) | Dual-channel speech enhancement method and device | |
WO2019080553A1 (en) | Microphone array-based target voice acquisition method and device | |
EP2562752A1 (en) | Sound source separator device, sound source separator method, and program | |
TW200849219A (en) | Systems, methods, and apparatus for signal separation | |
Roman et al. | Binaural segregation in multisource reverberant environments | |
CN110660404B (en) | Voice communication and interactive application system and method based on null filtering preprocessing | |
CN111681665A (en) | Omnidirectional noise reduction method, equipment and storage medium | |
Wang et al. | Combining superdirective beamforming and frequency-domain blind source separation for highly reverberant signals | |
WO2022256577A1 (en) | A method of speech enhancement and a mobile computing device implementing the method | |
Pertilä | Online blind speech separation using multiple acoustic speaker tracking and time–frequency masking | |
CN118899005B (en) | Audio signal processing method, device, computer equipment and storage medium | |
Dam et al. | Blind signal separation using steepest descent method |
Legal Events
Date | Code | Title | Description |
---|---|---|---
| AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TASHEV, IVAN;KIM, LAE-HOON;ACERO, ALEJANDRO;AND OTHERS;SIGNING DATES FROM 20100602 TO 20100607;REEL/FRAME:024533/0766
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
| STCF | Information on status: patent grant | Free format text: PATENTED CASE
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0001. Effective date: 20141014
| FPAY | Fee payment | Year of fee payment: 4
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8