
CN117214814B - Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment - Google Patents


Info

Publication number
CN117214814B
CN117214814B (application CN202311170984.3A, granted as CN 117214814 B)
Authority
CN
China
Prior art keywords
microphone
noise
sound source
frequency
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311170984.3A
Other languages
Chinese (zh)
Other versions
CN117214814A (en)
Inventor
梅琳
李童
曾瑜
张琳
陈晓旭
段志强
刘美丽
王中悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING SPECIAL EQUIPMENT INSPECTION AND RESEARCH INSTITUTE
Original Assignee
CHONGQING SPECIAL EQUIPMENT INSPECTION AND RESEARCH INSTITUTE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING SPECIAL EQUIPMENT INSPECTION AND RESEARCH INSTITUTE filed Critical CHONGQING SPECIAL EQUIPMENT INSPECTION AND RESEARCH INSTITUTE
Priority to CN202311170984.3A priority Critical patent/CN117214814B/en
Publication of CN117214814A publication Critical patent/CN117214814A/en
Application granted granted Critical
Publication of CN117214814B publication Critical patent/CN117214814B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The invention relates to the technical field of three-dimensional multi-sound-source localization of unmanned aerial vehicles, and discloses a cross-correlation sound source DOA estimation method and electronic equipment based on noise angle spectrum subtraction, wherein the method comprises the following steps: microphone location modeling and cross-correlation based DOA estimation including estimating TDOA and noise reduction using angular spectra; in the noise reduction, the angular spectrum of the unmanned aerial vehicle noise is subtracted from the angular spectrum of the mixed signal, and then the microphone signal is averaged. The invention can solve the problem that the unmanned aerial vehicle positioning frequently fails in the low signal-to-noise ratio environment.

Description

Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment
Technical Field
The invention relates to the technical field of three-dimensional multi-sound-source positioning of unmanned aerial vehicles, in particular to a cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment.
Background
Sound source localization is an important capability of unmanned aerial vehicles, especially when the video view is occluded. Using unmanned aerial vehicles to localize sound sources is an emerging field that applies intelligent audio in military and civilian settings, including search and rescue, package delivery, health services, and surveillance. A major challenge, however, is identifying the direction of the target sound source while the drone itself generates noise; to make practical applications viable, the noise emitted by the unmanned aerial vehicle must be suppressed. The direction of arrival (DOA), i.e. the azimuth and elevation, of a ground sound source is estimated using an array of microphones embedded on the drone.
A drone acoustic sensing system generally consists of an on-board microphone and loudspeaker array and supports functions such as sound source localization, sound source separation, sound source tracking, and speech enhancement of ground sources. The accuracy of sound source localization, however, conditions all subsequent functions of the drone listening system, so the effectiveness of the system depends largely on the localization information of the sound source. In recent years, sound source localization on drones has combined an embedded microphone array with a camera, exploiting audio and video information respectively to achieve better localization accuracy. Under unfavorable lighting and environmental conditions, however, vision may be obscured, leaving audio information as the only cue for perceiving sound sources. The on-board microphones mainly capture sound from the drone's own noise sources, such as motors and propellers, which are much closer to the array than ground or other sources. Any sound from the ground is therefore masked by the drone noise, resulting in an extremely low signal-to-noise ratio (SNR), defined as the power ratio between the source signal and the drone noise. Typically, the drone noise spectrum consists of tonal (harmonic) components and a wideband component: the harmonics stem from motor rotation, while the broadband noise stems from turbulence and the airflow over the propeller blades. In general, the noise distribution of a drone depends on intrinsic characteristics, such as the motor current or motor speed, the phase difference between the propeller blades, and the flight mode, and on extrinsic characteristics, such as pressure, humidity, and wind conditions. A drone-embedded microphone array sound dataset needs to account for these different intrinsic and extrinsic features.
For example, one work built a drone-embedded dataset containing rotor speeds for different flight configurations and observed the relationship between spectral harmonic components and motor speed. The literature [B. Yen and Y. Hioka, "Noise power spectral density scaled SNR response estimation with restricted range search for sound source localisation using unmanned aerial vehicles," EURASIP J. Audio, Speech, Music Process., no. 1, pp. 1–26, Dec. 2020] proposes a drone-embedded recording system employing a denoising autoencoder and a restricted-range search post-processing algorithm. In practice, however, DOA estimation remains extremely challenging due to the dynamics of drone noise, and it often fails under low SNR conditions (e.g., −30 dB).
Disclosure of Invention
In view of the above, the invention aims to provide a cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment, which can solve the problem that unmanned aerial vehicle positioning frequently fails in a low signal-to-noise ratio environment.
The invention solves the technical problems by the following technical means:
In a first aspect, the invention discloses a cross-correlation sound source DOA estimation method based on noise angle spectral subtraction, which comprises the following steps: microphone location modeling and cross-correlation based DOA estimation including estimating TDOA and noise reduction using angular spectra; in the noise reduction, the angular spectrum of the unmanned aerial vehicle noise is subtracted from the angular spectrum of the mixed signal, and then the microphone signal is averaged.
Further, the microphone position modeling step includes:
S101, setting an origin of a spherical coordinate system with respect to the Q microphones attached to the unmanned aerial vehicle, wherein the position of each microphone is x_q ≡ (r_q, θ_q, φ_q), q = 1, …, Q, with Q the number of microphones, r_q the radius, θ_q the elevation angle, and φ_q the azimuth angle;
S102, setting L active sound sources in the far field of the microphone array, with positions y_l ≡ (r_l, θ_l, φ_l), l = 1, …, L; letting h_{l,q}(t) denote the drone-related impulse response from the l-th source to the q-th microphone, and obtaining the signal received by the q-th microphone;
S103, assuming that the sound sources are in the far field of the unmanned aerial vehicle, ignoring scattering of the source signals by the drone structure; then modeling the drone-related transfer function between the l-th sound source and the q-th microphone.
Further, in the step S102, the signal received by the q-th microphone is given by equation (1):

p_q(t) = Σ_{l=1}^{L} h_{l,q}(t) * s_l(t) + Σ_{m=1}^{M} v_{m,q}(t)    (1)

where s_l(t) is the sound signal from the l-th sound source, v_{m,q}(t) is the drone noise caused by the m-th motor at the q-th microphone, M is the number of motors on the drone, and * is the convolution operator;
The short-time Fourier transform of equation (1) is:

P_q(n, k) = Σ_{l=1}^{L} H_{l,q}(k) S_l(n, k) + Σ_{m=1}^{M} V_{m,q}(n, k)    (2)

where S_l(n, k) and V_{m,q}(n, k) are the short-time Fourier transforms of s_l(t) and v_{m,q}(t), respectively, H_{l,q}(k) is the frequency-domain transfer function corresponding to h_{l,q}(t), and n and k are the time-frame and frequency-bin indices, respectively, n ∈ {1, …, N}, k ∈ {1, …, K}, where N and K are the numbers of time frames and frequency bins.
Further, in the step S103, the drone-related transfer function between the l-th sound source and the q-th microphone is modeled as

H_{l,q}(k) = exp(−i 2π f_k τ_{l,q})    (3)

where f_k denotes the frequency of the k-th frequency bin and τ_{l,q} is the time difference between the signal from the l-th source reaching the origin and reaching the q-th microphone.
Further, the estimating TDOA step using the angular spectrum includes:
S201, letting P_q(n, k) = |P_q(n, k)| exp(iΦ_q(n, k)), where |P_q(n, k)| and Φ_q(n, k) are the amplitude and phase of the q-th microphone signal, respectively;
S202, within the search boundaries of θ ∈ [−π/2, π/2] and φ ∈ (−π, π], considering a uniform grid of potential angles in three-dimensional space;
S203, letting j index the microphone pair associated with channels q and q', where j = 1, …, J and J = Q(Q−1)/2 is the total number of distinct pairs in the array; letting τ_j(θ, φ) be the TDOA of the j-th microphone pair, and obtaining the instantaneous angular spectrum of the j-th pair in the (n, k)-th time-frequency bin;
S204, weighting the frequency domain and the time domain of the microphone signals respectively, so as to combine the spectral, temporal, and spatial information of the microphone signals; averaging over frequency k and time n for the j-th pair of microphones, and taking the peak of the resulting spectrum as the TDOA result for the l-th sound source.
Further, in the step S202, the TDOA between the two microphone channels q and q' for a sound source propagating in the free field from direction (θ, φ) is calculated as:

τ_j(θ, φ) = (x_q − x_{q'}) · u(θ, φ) / c

where u(θ, φ) is the unit vector in the direction of sound propagation, · is the dot product, and c is the speed of sound.
Further, in the step S203, the instantaneous angular spectrum of the j-th microphone pair in the (n, k)-th time-frequency bin is:

Ψ(n, k, τ_j) = exp(i(Φ_q(n, k) − Φ_{q'}(n, k))) exp(i 2π f_k τ_j(θ, φ))    (4)

where exp(−i 2π f_k τ_j(θ, φ)) is the theoretically calculated inter-channel phase difference of the j-th microphone pair for the delay τ_j(θ, φ); at the true source direction the observed and theoretical phase differences cancel, so Ψ attains its maximum.
Further, in the step S204, averaging over frequency k and time n for the j-th pair of microphones gives the angular spectrum

Ψ_j(θ, φ) = (1/(NK)) Σ_{n=1}^{N} Σ_{k=1}^{K} Re{Ψ(n, k, τ_j(θ, φ))}

and the peak of Σ_{j=1}^{J} Ψ_j(θ, φ) over the angle grid yields the TDOA result for the l-th sound source.
Further, in the noise reduction, the angular spectrum of the sound source is expressed as

Ψ_source(θ, φ) = Σ_{j=1}^{J} Σ_{n=1}^{N} Σ_{k=k_1}^{k_2} [Ψ_mixture(n, k, τ_j) − Ψ_drone(n, k, τ_j)]

where Ψ_mixture(·) and Ψ_drone(·) are the instantaneous angular spectra of the mixed signal and of the drone noise, respectively, and k_1, k_2 are the lower and upper limits of the speech frequency band; because the drone noise harmonics dominate the low-frequency range, the frequency summation is carried out over the speech bandwidth. The TDOA estimate for the l-th sound source is then given by the peak response:

(θ̂_l, φ̂_l) = argmax_{(θ, φ)} Re{Ψ_source(θ, φ)}
In a second aspect, the present invention also discloses an electronic device, where the electronic device includes a processor and a memory coupled to each other, where the memory stores a computer program, and when the computer program is executed by the processor, causes the electronic device to execute the method described above.
The invention has the beneficial effects that:
1. The invention uses a cross-correlation-based DOA estimation method on the mixed signal for sound source localization, considers the instantaneous angular spectra of the mixed signal and of the noise in each time-frequency bin, and derives the DOA of the source signal from the different microphone pairs; before weighting the microphone pairs, the drone noise is suppressed by subtracting the angular spectrum of the drone-specific noise from that of the mixed signal, which solves the prior-art problem that drone-based localization frequently fails in low signal-to-noise ratio environments.
2. The invention performs noise angular spectrum subtraction using the time difference of arrival (TDOA) of different microphone pairs, and achieves noise suppression in multichannel recordings using measured drone-noise angular spectra.
3. The DOA estimation method has good estimation performance and low complexity, and is therefore well suited to real-time applications; moreover, it can localize sound sources in three-dimensional space for arbitrary array geometries, and multiple sound sources separated by a sufficient azimuth distance can be localized with acceptable accuracy.
Drawings
FIG. 1 is a diagram of an irregular array of microphones on a drone with multiple sound sources;
fig. 2 is an explanatory diagram of a microphone signal calculation transient angular spectrum of the unmanned aerial vehicle.
Detailed Description
The present application will be described in detail below with reference to the drawings and the specific embodiments, wherein like or similar parts are designated by the same reference numerals throughout the drawings or the description, and implementations not shown or described in the drawings are in a form well known to those of ordinary skill in the art. In the description of the present application, the terms "first," "second," and the like are used merely to distinguish between descriptions and are not to be construed as indicating or implying relative importance.
Embodiment 1
The cross-correlation sound source DOA estimation method based on noise angle spectral subtraction in this embodiment includes the following steps:
S1, modeling microphone position
Consider an irregular array of Q microphones attached to a drone, with an appropriately chosen origin of the spherical coordinate system, where the position of each microphone is x_q ≡ (r_q, θ_q, φ_q), q = 1, …, Q. Set L active sound sources in the far field of the microphone array, with positions y_l ≡ (r_l, θ_l, φ_l), l = 1, …, L, as shown in fig. 1. Let h_{l,q}(t) denote the drone-related impulse response from the l-th source to the q-th microphone. The signal received by the q-th microphone is then given by

p_q(t) = Σ_{l=1}^{L} h_{l,q}(t) * s_l(t) + Σ_{m=1}^{M} v_{m,q}(t)    (1)

where s_l(t) is the sound signal from the l-th sound source, v_{m,q}(t) is the drone noise caused by the m-th motor at the q-th microphone, M is the number of motors on the drone, and * is the convolution operator.
The Short Time Fourier Transform (STFT) of equation (1) can be written as

P_q(n, k) = Σ_{l=1}^{L} H_{l,q}(k) S_l(n, k) + Σ_{m=1}^{M} V_{m,q}(n, k)    (2)

where S_l(n, k) and V_{m,q}(n, k) are the STFTs of s_l(t) and v_{m,q}(t), respectively, H_{l,q}(k) is the frequency-domain transfer function corresponding to h_{l,q}(t), and n and k are the time-frame and frequency-bin indices, n ∈ {1, …, N}, k ∈ {1, …, K}, with N and K the numbers of time frames and frequency bins.
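As a concrete illustration of the STFT in equation (2), the following is a minimal numpy sketch of one microphone channel under the signal model of equation (1). The Hann window, FFT length, hop size, sampling rate, and toy signals are illustrative assumptions, not values specified in the patent; the integer-sample `np.roll` delay stands in crudely for the impulse response h_{l,q}(t).

```python
import numpy as np

def stft(x, n_fft=256, hop=128):
    """Naive one-sided STFT with a Hann window.
    Returns an array of shape (num_frames N, num_bins K)."""
    win = np.hanning(n_fft)
    frames = [np.fft.rfft(x[s:s + n_fft] * win)
              for s in range(0, len(x) - n_fft + 1, hop)]
    return np.array(frames)

# Toy mixture at microphone q: a delayed far-field source plus one motor harmonic.
fs = 8000
t = np.arange(fs) / fs
s_l = np.sin(2 * np.pi * 440.0 * t)          # source signal s_l(t)
v_mq = 0.5 * np.sin(2 * np.pi * 120.0 * t)   # drone-noise harmonic v_{m,q}(t)
p_q = np.roll(s_l, 3) + v_mq                 # integer-sample delay as a crude h_{l,q}
P_q = stft(p_q)                              # P_q(n, k); here N = 61 frames, K = 129 bins
```

With these illustrative settings, one second of audio at 8 kHz yields 61 frames of 129 one-sided frequency bins each.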
Assuming the sound sources are in the far field of the drone, scattering of the source signals by the drone structure can be ignored. We can then model the drone-related transfer function between the l-th sound source and the q-th microphone as

H_{l,q}(k) = exp(−i 2π f_k τ_{l,q})    (3)

where f_k denotes the frequency of the k-th frequency bin and τ_{l,q} is the time difference between the signal from the l-th source reaching the origin and reaching the q-th microphone.
S2, DOA estimation based on cross correlation
S201, estimating TDOA by using angular spectrum
Let P_q(n, k) = |P_q(n, k)| exp(iΦ_q(n, k)), where |P_q(n, k)| and Φ_q(n, k) are the amplitude and phase of the q-th microphone signal, respectively. The normalized cross-spectrum of channels q and q' is

R_{q,q'}(n, k) = P_q(n, k) P_{q'}*(n, k) / (|P_q(n, k)| |P_{q'}(n, k)|) = exp(i(Φ_q(n, k) − Φ_{q'}(n, k)))

where (·)* denotes the complex conjugate operator; R_{q,q'}(n, k) retains only the phase difference of the q-th and q'-th signals.
Within the search boundaries θ ∈ [−π/2, π/2] and φ ∈ (−π, π], we consider a uniform grid of candidate angles in three-dimensional space. For any angle (θ, φ), the TDOA between the two microphone channels q and q' for a sound source propagating in the free field is calculated as

τ_j(θ, φ) = (x_q − x_{q'}) · u(θ, φ) / c

where u(θ, φ) is the unit vector in the direction of sound propagation, · is the dot product, and c is the speed of sound. Let j index the microphone pair associated with channels q, q', where j = 1, …, J, and J = Q(Q−1)/2 is the total number of distinct pairs in the array.
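The free-field TDOA model above can be sketched as follows. The spherical-coordinate convention (elevation θ, azimuth φ) and the speed of sound c = 343 m/s are illustrative assumptions; the microphone coordinates are toy values.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound in m/s (assumed value)

def unit_vector(theta, phi):
    """Unit propagation vector u(theta, phi) for elevation theta in [-pi/2, pi/2]
    and azimuth phi in (-pi, pi] (convention assumed for illustration)."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)])

def tdoa(x_q, x_qprime, theta, phi, c=C_SOUND):
    """Free-field TDOA: tau_j(theta, phi) = (x_q - x_q') . u(theta, phi) / c."""
    return np.dot(np.asarray(x_q) - np.asarray(x_qprime),
                  unit_vector(theta, phi)) / c

# Two microphones 0.1 m apart along the x axis.
x1, x2 = np.array([0.05, 0.0, 0.0]), np.array([-0.05, 0.0, 0.0])
tau_endfire = tdoa(x1, x2, 0.0, 0.0)         # source on the +x axis: d/c delay
tau_broadside = tdoa(x1, x2, 0.0, np.pi / 2)  # source on the +y axis: zero delay
```

A source on the pair's axis (endfire) gives the maximum delay d/c ≈ 0.29 ms for d = 0.1 m, while a broadside source gives zero delay, as the dot product predicts.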
Let τ_j(θ, φ) be the TDOA of the j-th microphone pair. As shown in fig. 2, we obtain the instantaneous angular spectrum of the j-th microphone pair in the (n, k)-th time-frequency bin as

Ψ(n, k, τ_j) = exp(i(Φ_q(n, k) − Φ_{q'}(n, k))) exp(i 2π f_k τ_j(θ, φ))    (4)

where exp(−i 2π f_k τ_j(θ, φ)) is the theoretically calculated inter-channel phase difference of the j-th microphone pair for the delay τ_j(θ, φ); at the true source direction the observed and theoretical phase differences cancel, so Ψ attains its maximum.
The spectral, temporal, and spatial information of the microphone signals is combined by weighting the frequency domain and the time domain of the microphone signals, respectively. Specifically, we average over frequency k and time n for the j-th pair of microphones:

Ψ_j(θ, φ) = (1/(NK)) Σ_{n=1}^{N} Σ_{k=1}^{K} Re{Ψ(n, k, τ_j(θ, φ))}

and obtain the TDOA of the l-th sound source as the peak of the angular spectrum summed over all pairs:

(θ̂_l, φ̂_l) = argmax_{(θ, φ)} Σ_{j=1}^{J} Ψ_j(θ, φ)
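The instantaneous angular spectrum and its averaging over time frames n and frequency bins k can be sketched for a single microphone pair as follows. The toy STFT values and the candidate-delay grid are illustrative assumptions; the phase-only normalization mirrors the cross-spectrum R_{q,q'} above.

```python
import numpy as np

def pair_angular_spectrum(P_q, P_qp, tau_grid, freqs):
    """Averaged angular spectrum for one microphone pair.
    P_q, P_qp : STFTs of channels q and q', shape (N frames, K bins).
    tau_grid  : candidate TDOAs tau_j(theta, phi), one per grid direction.
    freqs     : frequency f_k of each bin, shape (K,).
    Returns Re{ exp(i(Phi_q - Phi_q')) * exp(i 2 pi f_k tau) } averaged over n, k."""
    # Observed inter-channel phase difference (amplitude discarded, phase-only).
    obs = np.exp(1j * (np.angle(P_q) - np.angle(P_qp)))      # (N, K)
    spec = np.empty(len(tau_grid))
    for i, tau in enumerate(tau_grid):
        steer = np.exp(1j * 2 * np.pi * freqs * tau)         # theoretical phase term
        spec[i] = np.mean(np.real(obs * steer))              # average over n and k
    return spec

# Toy check: channel q lags q' by tau0, so Phi_q - Phi_q' = -2 pi f tau0.
freqs = np.array([100.0, 200.0, 300.0])
tau0 = 1e-3
P_qp = np.ones((5, 3), dtype=complex)
P_q = np.exp(-1j * 2 * np.pi * freqs * tau0) * P_qp
tau_grid = np.array([0.0, 5e-4, 1e-3, 1.5e-3])
spec = pair_angular_spectrum(P_q, P_qp, tau_grid, freqs)     # peaks at tau_grid[2]
```

The spectrum reaches its maximum value of 1 at the candidate delay matching the true TDOA, since the observed and steering phase terms cancel there exactly.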
S202 noise reduction
The presence of drone noise in the mixed recordings produces false peaks in the angular spectrum, so the drone noise must be suppressed before picking the highest peak from the mixture. To suppress it, we exploit the fact that the positions of the drone motors and propellers are fixed relative to the microphone array: the drone noise sources can be located within the mixture by using drone-noise recordings made under specific parameters. To preserve spatial information, the angular spectrum of the drone noise is subtracted from the angular spectrum of the mixed signal before averaging over the microphone pairs. The angular spectrum of the sound source is expressed as
Ψ_source(θ, φ) = Σ_{j=1}^{J} Σ_{n=1}^{N} Σ_{k=k_1}^{k_2} [Ψ_mixture(n, k, τ_j) − Ψ_drone(n, k, τ_j)]

where Ψ_mixture(·) and Ψ_drone(·) are the instantaneous angular spectra of the mixed signal and of the drone noise, respectively, and k_1, k_2 are the lower and upper limits of the speech frequency band. Since the drone noise harmonics dominate the low-frequency range, the frequency summation is performed over the speech bandwidth. Thus, the TDOA estimate for the l-th sound source is given by the peak response:

(θ̂_l, φ̂_l) = argmax_{(θ, φ)} Re{Ψ_source(θ, φ)}
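The noise angular-spectrum subtraction step can be sketched as follows. The toy spectra, the band limits k_1 and k_2, and the (directions × bins) layout are illustrative assumptions; real spectra would come from the pair-averaging step above, evaluated on the mixture and on a drone-noise-only recording.

```python
import numpy as np

def denoised_angular_spectrum(psi_mixture, psi_drone, k1, k2):
    """Subtract the drone-noise angular spectrum from the mixture's,
    summing over speech-band bins k in [k1, k2] (inclusive indices).
    psi_* : real-valued angular spectra, shape (num directions, K bins)."""
    band = slice(k1, k2 + 1)
    return np.sum(psi_mixture[:, band] - psi_drone[:, band], axis=1)

# Toy spectra over 4 candidate directions and 6 bins: the drone noise creates
# a false peak at direction 0, while the true source sits at direction 2.
psi_mix = np.array([[3.0] * 6, [1.0] * 6, [2.0] * 6, [0.5] * 6])
psi_drone = np.array([[2.5] * 6, [0.2] * 6, [0.1] * 6, [0.1] * 6])

raw_peak = int(np.argmax(psi_mix.sum(axis=1)))   # picks the drone's false peak
doa_peak = int(np.argmax(denoised_angular_spectrum(psi_mix, psi_drone, 1, 4)))
```

Without subtraction the highest peak is the drone's own direction; after subtracting the drone-noise angular spectrum over the speech band, the peak moves to the true source direction.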
Embodiment 2
The present embodiment is an electronic device including a processor and a memory coupled to each other, where the memory stores a computer program that, when executed by the processor, causes the electronic device to perform the method of embodiment 1 above.
In this embodiment, the processor may be an integrated circuit chip with signal processing capability, or a general-purpose processor. For example, the processor may be a central processing unit (CPU), a GPU, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application.
The memory may be, but is not limited to, random access memory, read-only memory, programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, and the like. In this embodiment, the memory may be used to store the recorded microphone signals, the angular spectra, and related intermediate data. Of course, the memory may also be used to store a program that is executed by the processor upon receipt of an execution instruction.
It should be noted that, for convenience and brevity of description, specific working processes of the electronic device described above may refer to corresponding processes of each step in the foregoing method, and will not be described in detail herein.
From the foregoing description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented in hardware, or by means of software plus a necessary general hardware platform, and based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disc, a mobile hard disk, etc.), and includes several instructions for causing a computer device (may be a personal computer, an electronic device, or a network device, etc.) to execute the method described in the respective implementation scenario of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The above-described apparatus and method embodiments are merely illustrative, for example, flow charts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. In addition, functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (7)

1. The cross-correlation sound source DOA estimation method based on noise angle spectral subtraction is characterized by comprising the following steps: microphone location modeling and cross-correlation based DOA estimation including estimating TDOA and noise reduction using angular spectra; in the noise reduction, firstly subtracting the angular spectrum of the unmanned aerial vehicle noise from the angular spectrum of the mixed signal, and then averaging microphone signals; the microphone position modeling step includes:
S101, setting an origin of a spherical coordinate system with respect to the Q microphones attached to the unmanned aerial vehicle, wherein the position of each microphone is x_q ≡ (r_q, θ_q, φ_q), q = 1, …, Q, Q representing the number of microphones, r_q the radius, θ_q the elevation angle, and φ_q the azimuth angle;
S102, setting L active sound sources in the far field of the microphone array, with positions y_l ≡ (r_l, θ_l, φ_l), l = 1, …, L, letting h_{l,q}(t) represent the drone-related impulse response from the l-th source to the q-th microphone, and obtaining the signal received by the q-th microphone;
S103, assuming that the sound sources are in the far field of the unmanned aerial vehicle, ignoring scattering of the source signals by the drone structure; then modeling the drone-related transfer function between the l-th sound source and the q-th microphone;
The estimating TDOA using angular spectrum includes:
S201, letting P_q(n, k) = |P_q(n, k)| exp(iΦ_q(n, k)), where |P_q(n, k)| and Φ_q(n, k) are the amplitude and phase of the q-th microphone signal, respectively;
S202, within the search boundaries of θ ∈ [−π/2, π/2] and φ ∈ (−π, π], considering a uniform grid of potential angles in three-dimensional space;
S203, letting j index the microphone pair associated with channels q, q', where j = 1, …, J and J = Q(Q−1)/2 is the total number of distinct pairs in the array; letting τ_j(θ, φ) be the TDOA of the j-th microphone pair, and obtaining the instantaneous angular spectrum of the j-th pair in the (n, k)-th time-frequency bin;
S204, weighting the frequency domain and time domain of the microphone signals respectively to combine their spectral, temporal, and spatial information; averaging over frequency k and time n for the j-th pair of microphones, and taking the peak as the TDOA result for the l-th sound source;
in the noise reduction, the angular spectrum of the sound source is expressed as

Ψ_source(θ, φ) = Σ_{j=1}^{J} Σ_{n=1}^{N} Σ_{k=k_1}^{k_2} [Ψ_mixture(n, k, τ_j) − Ψ_drone(n, k, τ_j)]

wherein Ψ_mixture(·) and Ψ_drone(·) are the instantaneous angular spectra of the mixed signal and the drone noise, respectively, and k_1, k_2 are the lower and upper limits of the speech frequency band; because the drone noise harmonics dominate the low-frequency range, the frequency summation is carried out over the speech bandwidth; the TDOA estimate for the l-th sound source is given by the peak response:

(θ̂_l, φ̂_l) = argmax_{(θ, φ)} Re{Ψ_source(θ, φ)}
2. The cross-correlation sound source DOA estimation method based on noise angle spectrum subtraction as claimed in claim 1, wherein: in the step S102, the signal received by the q-th microphone is given by equation (1):

p_q(t) = Σ_{l=1}^{L} h_{l,q}(t) * s_l(t) + Σ_{m=1}^{M} v_{m,q}(t)    (1)

where s_l(t) is the sound signal from the l-th sound source, v_{m,q}(t) is the drone noise caused by the m-th motor at the q-th microphone, M is the number of motors on the drone, and * is the convolution operator;
The short-time Fourier transform of equation (1) is:

P_q(n, k) = Σ_{l=1}^{L} H_{l,q}(k) S_l(n, k) + Σ_{m=1}^{M} V_{m,q}(n, k)    (2)

where S_l(n, k) and V_{m,q}(n, k) are the short-time Fourier transforms of s_l(t) and v_{m,q}(t), respectively, H_{l,q}(k) is the frequency-domain transfer function corresponding to h_{l,q}(t), and n and k are the time-frame and frequency-bin indices, n ∈ {1, …, N}, k ∈ {1, …, K}, where N and K are the numbers of time frames and frequency bins, respectively.
3. The cross-correlation sound source DOA estimation method based on noise angle spectrum subtraction as claimed in claim 1, wherein: in the step S103, the drone-related transfer function between the l-th sound source and the q-th microphone is modeled as

H_{l,q}(k) = exp(−i 2π f_k τ_{l,q})    (3)

where f_k denotes the frequency of the k-th frequency bin and τ_{l,q} is the time difference between the signal from the l-th source reaching the origin and reaching the q-th microphone.
4. The cross-correlation sound source DOA estimation method based on noise angle spectrum subtraction as claimed in claim 3, wherein: in the step S202, the TDOA between the two microphone channels q and q' for a sound source propagating in the free field is calculated as:

τ_j(θ, φ) = (x_q − x_{q'}) · u(θ, φ) / c

where u(θ, φ) is the unit vector in the direction of sound propagation, · is the dot product, and c is the speed of sound.
5. The cross-correlation sound source DOA estimation method based on noise angle spectrum subtraction as claimed in claim 4, wherein: in the step S203, the instantaneous angular spectrum of the j-th microphone pair in the (n, k)-th time-frequency bin is:

Ψ(n, k, τ_j) = exp(i(Φ_q(n, k) − Φ_{q'}(n, k))) exp(i 2π f_k τ_j(θ, φ))    (4)

where exp(−i 2π f_k τ_j(θ, φ)) is the theoretically calculated inter-channel phase difference of the j-th microphone pair for the delay τ_j(θ, φ).
6. The cross-correlation sound source DOA estimation method based on noise angle spectrum subtraction as claimed in claim 5, wherein: in the step S204, averaging over frequency k and time n for the j-th pair of microphones to obtain the peak of the l-th sound source as the TDOA result is specifically:

Ψ_j(θ, φ) = (1/(NK)) Σ_{n=1}^{N} Σ_{k=1}^{K} Re{Ψ(n, k, τ_j(θ, φ))},  (θ̂_l, φ̂_l) = argmax_{(θ, φ)} Σ_{j=1}^{J} Ψ_j(θ, φ)
7. an electronic device, characterized in that: the electronic device comprising a processor and a memory coupled to each other, the memory storing a computer program which, when executed by the processor, causes the electronic device to perform the method of any of claims 1-6.
CN202311170984.3A 2023-09-12 2023-09-12 Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment Active CN117214814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311170984.3A CN117214814B (en) 2023-09-12 2023-09-12 Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311170984.3A CN117214814B (en) 2023-09-12 2023-09-12 Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment

Publications (2)

Publication Number Publication Date
CN117214814A CN117214814A (en) 2023-12-12
CN117214814B true CN117214814B (en) 2024-08-13

Family

ID=89040069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311170984.3A Active CN117214814B (en) 2023-09-12 2023-09-12 Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment

Country Status (1)

Country Link
CN (1) CN117214814B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110082725A (en) * 2019-03-12 2019-08-02 西安电子科技大学 Auditory localization delay time estimation method, sonic location system based on microphone array
CN110488223A (en) * 2019-07-05 2019-11-22 东北电力大学 A sound source localization method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2517690B (en) * 2013-08-26 2017-02-08 Canon Kk Method and device for localizing sound sources placed within a sound environment comprising ambient noise
US20200184994A1 (en) * 2018-12-07 2020-06-11 Nuance Communications, Inc. System and method for acoustic localization of multiple sources using spatial pre-filtering
CN115087881B (en) * 2020-06-01 2023-04-11 华为技术有限公司 Method and device for estimating angle of arrival (AOA)
CN112487703B (en) * 2020-11-09 2024-05-28 南京信息工程大学滨江学院 Underdetermined broadband signal DOA estimation method based on sparse Bayes in unknown noise field

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110082725A (en) * 2019-03-12 2019-08-02 西安电子科技大学 Auditory localization delay time estimation method, sonic location system based on microphone array
CN110488223A (en) * 2019-07-05 2019-11-22 东北电力大学 A sound source localization method

Also Published As

Publication number Publication date
CN117214814A (en) 2023-12-12

Similar Documents

Publication Publication Date Title
US10979805B2 (en) Microphone array auto-directive adaptive wideband beamforming using orientation information from MEMS sensors
Manamperi et al. Drone audition: Sound source localization using on-board microphones
CN104076331B (en) A kind of sound localization method of seven yuan of microphone arrays
JP6109927B2 (en) System and method for source signal separation
US20170208415A1 (en) System and method for determining audio context in augmented-reality applications
US10515650B2 (en) Signal processing apparatus, signal processing method, and signal processing program
US8947978B2 (en) System and method for estimating the direction of arrival of a sound
CN111624553B (en) Sound source localization method and system, electronic device and storage medium
US20080247274A1 (en) Sensor array post-filter for tracking spatial distributions of signals and noise
CN110491403A (en) Processing method, device, medium and the speech enabled equipment of audio signal
US9799322B2 (en) Reverberation estimator
Blanchard et al. Acoustic localization and tracking of a multi-rotor unmanned aerial vehicle using an array with few microphones
CN107219512B (en) Sound source positioning method based on sound transfer function
CN102147458B (en) Method and device for estimating direction of arrival (DOA) of broadband sound source
WO2011103488A1 (en) Microphone array subset selection for robust noise reduction
Sun et al. Joint DOA and TDOA estimation for 3D localization of reflective surfaces using eigenbeam MVDR and spherical microphone arrays
Wang et al. Time-frequency processing for sound source localization from a micro aerial vehicle
US20190281386A1 (en) Apparatus and a method for unwrapping phase differences
CN113687305A (en) Method, device and equipment for positioning sound source azimuth and computer readable storage medium
CN112216295B (en) Sound source positioning method, device and equipment
CN113625273A (en) Aliasing digital signal synthetic aperture positioning method
US20190250240A1 (en) Correlation function generation device, correlation function generation method, correlation function generation program, and wave source direction estimation device
CN117214814B (en) Cross-correlation sound source DOA estimation method based on noise angle spectral subtraction and electronic equipment
Altena et al. Comparison of acoustic localisation techniques for drone position estimation using real-world experimental data
Yen et al. Noise power spectral density scaled SNR response estimation with restricted range search for sound source localisation using unmanned aerial vehicles

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant