CN101835072B - Virtual Surround Sound Processing Method - Google Patents

Info

Publication number
CN101835072B
CN101835072B
Authority
CN
China
Prior art keywords
sound source
virtual
source point
virtual sound
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101448765A
Other languages
Chinese (zh)
Other versions
CN101835072A (en)
Inventor
王小军
周荣冠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AAC Technologies Pte Ltd
Original Assignee
AAC Acoustic Technologies Shenzhen Co Ltd
AAC Optoelectronic Changzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AAC Acoustic Technologies Shenzhen Co Ltd, AAC Optoelectronic Changzhou Co Ltd filed Critical AAC Acoustic Technologies Shenzhen Co Ltd
Priority to CN2010101448765A priority Critical patent/CN101835072B/en
Publication of CN101835072A publication Critical patent/CN101835072A/en
Application granted granted Critical
Publication of CN101835072B publication Critical patent/CN101835072B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Stereophonic System (AREA)

Abstract

The virtual surround sound processing method of the present invention comprises the following steps: measuring the transfer function HRTF(c, 0) at a virtual sound source point N; setting the horizontal distance from the unknown virtual sound source point M to the center of the head as f; drawing two straight lines through the two ears and point M that intersect the circle on which point N lies at points L and P, the angles between OL, OP and the y-axis being a and b, and denoting the HRTFs for angles a and b as HaR, HaL, HbR and HbL; deriving HaR, HaL, HbR and HbL from the measured HRTF of point N; and feeding R*HaR to the right ear and R*HbL to the left ear, thereby obtaining the unknown virtual sound source point M. The method obtains unknown virtual sound source points while reducing the number of HRTF measurements.

Description

Virtual Surround Sound Processing Method

Technical Field

The present invention relates to a virtual surround sound processing method for a headphone audio system.

Background Art

When listening to stereo music through headphones, virtual surround sound is used to increase the sense of envelopment: the music signal is processed to generate virtual sound sources at different angles and in different directions, and these are added to the original music signal to enhance its sense of envelopment and spaciousness.

Because of the pinna effect of the human ear, the spectrum of the sound wave that reaches the eardrum depends on the direction of the sound source. The auditory system therefore acts as a filter that depends on the spatial direction of the sound: the spectral content arriving from any direction is shaped by this filter before it reaches the eardrum. This direction-dependent frequency response is called the Head-Related Transfer Function (HRTF).

An HRTF is needed whenever a sound source is to be virtualized in a given direction. For example, the input music signal is split into an L signal and an R signal; to virtualize a sound source in the horizontal plane at 60 degrees from the perpendicular to the line connecting the two ears, the HRTF for that direction, HRTF(60, 0), is required.

Figure 1 is a schematic diagram of the HRTF transmission paths for a sound source point A at 60 degrees from the perpendicular to the interaural line. In Figure 1, the 60-degree direction is the angle between the source direction and the y-axis, measured at the center of the head (the origin). When sound source point A lies in the 60-degree direction, the HRTF to the left ear is denoted H60L and the HRTF to the right ear H60R. By axial symmetry, for a source in the 300-degree direction the HRTF to the left ear is H60R and to the right ear H60L.

The HRTF must be measured with a human head and an audio test system. Assuming the HRTF has been measured for a source distance of 1.4 m, a virtual sound source 1.4 m from the center of the head can be synthesized:

To virtualize the L signal as a sound source in the 300-degree direction, L*H60R is fed to the left ear and L*H60L to the right ear;

to virtualize the R signal as a sound source in the 60-degree direction, R*H60L is fed to the left ear and R*H60R to the right ear, where * denotes convolution.

The HRTF thus yields the virtual sound source point A at 1.4 m from the center of the head and an azimuth of 60 degrees.
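This convolution step can be sketched in a few lines of code. The following is only an illustrative example (Python with NumPy/SciPy is assumed; the patent prescribes no implementation), with placeholder impulse responses H60L and H60R standing in for the measured data of Figure 3 and an assumed 44.1 kHz sample rate:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100                        # assumed sample rate
# Placeholder head-related impulse responses for azimuth 60 degrees at 1.4 m;
# real values would come from a measured data set such as the one in Figure 3.
H60L = np.zeros(128)
H60L[20] = 1.0                    # toy left-ear response: a pure delay
H60R = np.zeros(128)
H60R[14] = 0.9                    # toy right-ear response: shorter path

L = np.random.randn(fs)           # one second of the L channel (white-noise stand-in)
R = np.random.randn(fs)           # one second of the R channel

# R virtualized as a source at 60 degrees: R*H60L to the left ear,
# R*H60R to the right ear ("*" is convolution, as in the text).
left_from_R = fftconvolve(R, H60L)
right_from_R = fftconvolve(R, H60R)

# By axial symmetry, L virtualized at 300 degrees swaps the two filters:
# L*H60R to the left ear, L*H60L to the right ear.
left_from_L = fftconvolve(L, H60R)
right_from_L = fftconvolve(L, H60L)
```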

If a virtual sound source point at another distance from the center of the head is required, for example at 0.3 m, the HRTF at 0.3 m has to be measured again. Each HRTF measurement, however, is both complicated and time-consuming.

It is therefore necessary to provide a virtual surround sound processing method that obtains virtual sound source points while effectively reducing the number of HRTF measurements.

Summary of the Invention

To address the complexity and time cost of existing HRTF measurements, the present invention provides a virtual surround sound processing method that obtains virtual sound source points while effectively reducing the number of HRTF measurements.

A virtual surround sound processing method comprises the following steps:

providing an earphone with a built-in processing chip in which head-related transfer functions (HRTFs) are pre-stored, and measuring the transfer function HRTF(c, 0) of a virtual sound source point N; a coordinate system is built with the center of the head as the origin and the perpendicular to the line connecting the two ears as the y-axis, and the angle between the line from point N to the origin and the y-axis is the azimuth c of point N;

determining the unknown virtual sound source point M at a horizontal distance f from the center of the head, point M having the same azimuth as point N;

drawing a circle centered at the origin with a radius equal to the distance from point N to the origin, and drawing two straight lines, one through each ear and point M, that intersect the circle on which point N lies at points L and P; the angles that OL and OP make with the y-axis are a and b, and the HRTFs for angles a and b are denoted HaR, HaL, HbR and HbL;

the processing chip deriving HaR, HaL, HbR and HbL from the measured HRTF of point N;

the earphone feeding R*HaR to the right ear and R*HbL to the left ear, thereby obtaining the unknown virtual sound source point M.

As a further improvement of the above method, when the music signal input to the headphone audio system is split into an L signal and an R signal, the music signal after virtual surround processing is:

Lout = m×L + L*HaR + R*HbL;

Rout = m×R + R*HaR + L*HbL, where * denotes convolution and m is a coefficient that adjusts the proportion of the original signal in the processed signal.
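A short sketch of this mixing rule follows (Python/NumPy is again an assumed implementation language; the coefficient value m = 0.7, the placeholder impulse responses and the truncation of the convolution tails to the input length are illustrative choices, not taken from the patent):

```python
import numpy as np
from scipy.signal import fftconvolve

def virtual_surround(L, R, HaR, HbL, m=0.7):
    """Apply Lout = m*L + L*HaR + R*HbL and Rout = m*R + R*HaR + L*HbL,
    where * is convolution. L and R are equal-length 1-D arrays, and HaR and
    HbL are the head-related impulse responses for angles a and b. The
    convolution tails are truncated to the input length to keep the two
    output channels aligned."""
    n = len(L)
    Lout = m * L + fftconvolve(L, HaR)[:n] + fftconvolve(R, HbL)[:n]
    Rout = m * R + fftconvolve(R, HaR)[:n] + fftconvolve(L, HbL)[:n]
    return Lout, Rout

# Toy usage with pure-delay placeholder responses and white-noise input.
HaR = np.zeros(128)
HaR[10] = 1.0
HbL = np.zeros(128)
HbL[25] = 1.0
L = np.random.randn(44100)
R = np.random.randn(44100)
Lout, Rout = virtual_surround(L, R, HaR, HbL)
```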

As a further improvement, the virtual sound source point N is obtained by feeding R*HcL to the left ear and R*HcR to the right ear.

As a further improvement, the azimuth c of the virtual sound source point N is 60 degrees.

As a further improvement, the distance from the virtual sound source point N to the center of the head is 1.4 m.

As a further improvement, the horizontal distance f from the unknown virtual sound source point M to the center of the head is 0.3 m.

In the virtual surround sound processing method of the present invention, only the HRTF of a single virtual sound source point needs to be measured; another virtual sound source point at the same azimuth but a different horizontal distance can then be derived from the measured HRTF values by geometric construction, reducing the number of HRTF measurements.

In summary, the virtual surround sound processing method obtains virtual sound source points while reducing the number of HRTF measurements.

Brief Description of the Drawings

Figure 1 is a schematic diagram of HRTF transmission paths related to the present invention.

Figure 2 is a schematic diagram of the HRTF transmission paths in the virtual surround sound processing method of the present invention.

Figure 3 shows the time-domain impulse responses of HRTF(60, 0) in Figure 2.

Detailed Description

The virtual surround sound processing method of the present invention is described below with reference to the accompanying drawings.

The present invention provides a virtual surround sound processing method whose main purpose is to obtain, from the measured HRTF of a virtual sound source point at a given azimuth, virtual sound source points at the same azimuth but at different horizontal distances.

Figure 2 is a schematic diagram of the HRTF transmission paths of the method. In Figure 2, a coordinate system is built with the center of the head as the origin and the perpendicular to the line connecting the two ears as the y-axis; the angle between the line from the virtual sound source point N to the origin and the y-axis is the azimuth c of point N. Point N lies in the horizontal plane at an azimuth of c degrees, and the transfer function for that direction is HRTF(c, 0). In this embodiment, c is 60 degrees.

Figure 3 shows the time-domain impulse responses of HRTF(60, 0): the upper plot is the left-ear transfer function H60L and the lower plot the right-ear transfer function H60R. The HRTF must be measured with a human head and an audio test system; the data in Figure 3 were measured with the virtual sound source point N at a horizontal distance of 1.4 m from the head.

The 60-degree azimuth of point N is the angle between the source direction and the y-axis, measured at the center of the head. When point N is at 60 degrees, the HRTF to the left ear is H60L and the HRTF to the right ear is H60R.

When the music signal input to the headphone audio system is split into an L signal and an R signal, the data in Figure 3 can be used to synthesize the virtual sound source point N in the horizontal plane at 1.4 m from the center of the head:

to virtualize the L signal as a sound source in the 300-degree direction, L*H60R is fed to the left ear and L*H60L to the right ear;

to virtualize the R signal as a sound source in the 60-degree direction, R*H60L is fed to the left ear and R*H60R to the right ear.

The music signal after virtual surround processing is then:

Lout = m×L + L*H60R + R*H60L;

Rout = m×R + R*H60R + L*H60L,

where * denotes convolution and m is a coefficient that adjusts the proportion of the original signal in the processed signal.

Point N in Figure 2 therefore represents a virtual sound source point 1.4 m from the center of the head at an azimuth of 60 degrees: feeding R*H60L to the left ear and R*H60R to the right ear makes the sound appear to come from that point.

Next, the measured 1.4 m HRTF data are used to virtualize a second position, the unknown virtual sound source point M, at a horizontal distance f from the center of the head and at the same azimuth as point N, that is, still at 60 degrees from the perpendicular to the line connecting the two ears. In this embodiment, f is 0.3 m.

The procedure is as follows:

As shown in Figure 2, a circle is drawn centered at the origin with a radius equal to the distance from point N to the origin. Two straight lines are drawn through the two ears and the unknown point M, intersecting the 1.4 m circle centered on the head at points L and P; this gives the dashed lines OL and OP, where the angle between OL and the y-axis is a and the angle between OP and the y-axis is b. The first few samples of an HRTF impulse response are approximately zero; they correspond to the propagation delay from the sound source to the ears, so the samples covering the travel time from the source to the ears are zero. In Figure 3, the transfer functions from point L to the right ear and from point M to the right ear differ only in these initial zero-valued samples, and likewise the transfer functions from point P to the left ear and from point M to the left ear differ only in their initial zero-valued samples. The delays corresponding to the distances from L to M and from P to M differ by at most a few samples and can be treated as equal. The HRTFs for angles a and b can therefore be used in place of the 60-degree HRTF.
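The construction of angles a and b can be reproduced numerically. The sketch below (Python/NumPy assumed) places the ears on the x-axis at ±8.75 cm from the origin, an interaural half-width the patent does not specify, and intersects each ear-through-M ray with the 1.4 m circle; the pairing of L with the right-ear ray and P with the left-ear ray is inferred from HaR being applied to the right ear and HbL to the left ear. It also reports the extra path lengths |LM| and |PM| as delays in samples at an assumed 44.1 kHz rate, as a check on the "only the leading delay differs" approximation:

```python
import numpy as np

FS = 44100            # assumed sample rate
C_SOUND = 343.0       # speed of sound in m/s

def far_intersection(origin, through, radius):
    """Intersect the ray from `origin` through `through` with a circle of
    `radius` centered at (0, 0), returning the intersection beyond `through`."""
    d = through - origin
    a = d @ d
    b = 2.0 * (origin @ d)
    c = origin @ origin - radius ** 2
    t = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)   # positive root
    return origin + t * d

def derive_angles(c_deg=60.0, f=0.3, r=1.4, ear_half_span=0.0875):
    """Return (a, b) in degrees from the y-axis plus the path differences
    |LM| and |PM| expressed in samples, for the construction of Figure 2."""
    c = np.radians(c_deg)
    M = np.array([f * np.sin(c), f * np.cos(c)])       # unknown source point M
    right_ear = np.array([ear_half_span, 0.0])
    left_ear = np.array([-ear_half_span, 0.0])
    L = far_intersection(right_ear, M, r)              # ray right ear -> M -> L
    P = far_intersection(left_ear, M, r)               # ray left ear  -> M -> P
    a = np.degrees(np.arctan2(L[0], L[1]))             # angle of OL from y-axis
    b = np.degrees(np.arctan2(P[0], P[1]))             # angle of OP from y-axis
    d_LM = np.linalg.norm(L - M) / C_SOUND * FS        # extra delay to right ear
    d_PM = np.linalg.norm(P - M) / C_SOUND * FS        # extra delay to left ear
    return a, b, d_LM, d_PM

print(derive_angles())
```

With these assumed dimensions the two extra delays come out nearly equal (they differ by a fraction of a sample), which is what lets HaR and HbL stand in for the 0.3 m responses without disturbing the interaural time difference.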

That is, once the HRTFs for angles a and b at 1.4 m, namely HaR, HaL, HbR and HbL, are known, the unknown virtual sound source point M can be synthesized: feeding R*HaR to the right ear and R*HbL to the left ear makes the sound appear to come from point M.

By axial symmetry, a virtual sound source at the mirror image of point M about the y-axis, on the circle of radius 0.3 m, can also be synthesized: L*HbL is fed to the right ear and L*HaR to the left ear. The music signal after virtual surround processing is then:

Lout = m×L + L*HaR + R*HbL;

Rout = m×R + R*HaR + L*HbL.

In the virtual surround sound processing method of the present invention, the angles a and b are obtained by geometric construction from the measured 1.4 m HRTF data, and the HRTF values at 0.3 m are then derived from the measured HRTF(60, 0) values. There is no need to measure the HRTF at 0.3 m again, so the desired unknown virtual sound source point M is obtained with fewer HRTF measurements.

Thus, with the method of the present invention, only the HRTF of a single virtual sound source point needs to be measured; another virtual sound source point at the same azimuth but a different horizontal distance can be derived from the measured HRTF values by geometric construction, reducing the number of HRTF measurements.

The HRTF computation can be carried out by a processing chip built into the earphone, in which the HRTF functions are pre-stored.

In summary, the virtual surround sound processing method obtains virtual sound source points while reducing the number of HRTF measurements.

The above are only preferred embodiments of the present invention and are not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (6)

1. A virtual surround sound processing method for a headphone audio system, comprising the following steps:

providing an earphone with a built-in processing chip in which head-related transfer functions (HRTFs) are pre-stored, and measuring the transfer function HRTF(c, 0) of a virtual sound source point N, wherein a coordinate system is built with the center of the head as the origin and the perpendicular to the line connecting the two ears as the y-axis, and the angle between the line from point N to the origin and the y-axis is the azimuth c of point N;

determining the unknown virtual sound source point M at a horizontal distance f from the center of the head, point M having the same azimuth as point N;

drawing a circle centered at the origin with a radius equal to the distance from point N to the origin, and drawing two straight lines, one through each ear and point M, that intersect the circle on which point N lies at points L and P, the angles that OL and OP make with the y-axis being a and b respectively, and the HRTFs for angles a and b being denoted HaR, HaL, HbR and HbL;

the processing chip deriving HaR, HaL, HbR and HbL from the measured HRTF of point N;

the earphone feeding R*HaR to the right ear and R*HbL to the left ear, thereby obtaining the unknown virtual sound source point M.

2. The virtual surround sound processing method according to claim 1, wherein, when the music signal input to the headphone audio system is split into an L signal and an R signal, the music signal after virtual surround processing is: Lout = m×L + L*HaR + R*HbL; Rout = m×R + R*HaR + L*HbL, where * denotes convolution and m is a coefficient that adjusts the proportion of the original signal in the processed signal.

3. The virtual surround sound processing method according to claim 2, wherein the virtual sound source point N is obtained by feeding R*HcL to the left ear and R*HcR to the right ear.

4. The virtual surround sound processing method according to claim 3, wherein the azimuth c of the virtual sound source point N is 60 degrees.

5. The virtual surround sound processing method according to claim 4, wherein the distance from the virtual sound source point N to the center of the head is 1.4 m.

6. The virtual surround sound processing method according to claim 5, wherein the horizontal distance f from the unknown virtual sound source point M to the center of the head is 0.3 m.
CN2010101448765A 2010-04-06 2010-04-06 Virtual Surround Sound Processing Method Expired - Fee Related CN101835072B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101448765A CN101835072B (en) 2010-04-06 2010-04-06 Virtual Surround Sound Processing Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101448765A CN101835072B (en) 2010-04-06 2010-04-06 Virtual Surround Sound Processing Method

Publications (2)

Publication Number Publication Date
CN101835072A CN101835072A (en) 2010-09-15
CN101835072B true CN101835072B (en) 2011-11-23

Family

ID=42718969

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101448765A Expired - Fee Related CN101835072B (en) 2010-04-06 2010-04-06 Virtual Surround Sound Processing Method

Country Status (1)

Country Link
CN (1) CN101835072B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9037468B2 (en) 2008-10-27 2015-05-19 Sony Computer Entertainment Inc. Sound localization for user in motion
US10585472B2 (en) 2011-08-12 2020-03-10 Sony Interactive Entertainment Inc. Wireless head mounted display with differential rendering and sound localization
US10209771B2 (en) 2016-09-30 2019-02-19 Sony Interactive Entertainment Inc. Predictive RF beamforming for head mounted display
CN103631270B (en) * 2013-11-27 2016-01-13 中国人民解放军空军航空医学研究所 Guide rail rotary chain drive sound source position regulates manned HRTF measuring circurmarotate
CN106303832B (en) 2016-09-30 2019-12-27 歌尔科技有限公司 Loudspeaker, method for improving directivity, head-mounted equipment and method
CN107172566B (en) * 2017-05-11 2019-01-01 广州酷狗计算机科技有限公司 Audio-frequency processing method and device
WO2018210974A1 (en) * 2017-05-16 2018-11-22 Gn Hearing A/S A method for determining distance between ears of a wearer of a sound generating object and an ear-worn, sound generating object
CN107182003B (en) * 2017-06-01 2019-09-27 西南电子技术研究所(中国电子科技集团公司第十研究所) Airborne three-dimensional call virtual auditory processing method
CN113645531B (en) * 2021-08-05 2024-04-16 高敬源 Earphone virtual space sound playback method and device, storage medium and earphone

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100416757B1 (en) * 1999-06-10 2004-01-31 삼성전자주식회사 Multi-channel audio reproduction apparatus and method for loud-speaker reproduction
TWI251153B (en) * 2003-10-24 2006-03-11 Univ Nat Chiao Tung Method of composition operation of high efficiency head related transfer function
US7634092B2 (en) * 2004-10-14 2009-12-15 Dolby Laboratories Licensing Corporation Head related transfer functions for panned stereo audio content
US8027479B2 (en) * 2006-06-02 2011-09-27 Coding Technologies Ab Binaural multi-channel decoder in the context of non-energy conserving upmix rules
KR100829560B1 (en) * 2006-08-09 2008-05-14 삼성전자주식회사 Method and apparatus for encoding / decoding multi-channel audio signal, Decoding method and apparatus for outputting multi-channel downmixed signal in 2 channels

Also Published As

Publication number Publication date
CN101835072A (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN101835072B (en) Virtual Surround Sound Processing Method
US20130177166A1 (en) Head-related transfer function (hrtf) selection or adaptation based on head size
US9609436B2 (en) Systems and methods for audio creation and delivery
CN113170272B (en) Near-field audio rendering
US10341775B2 (en) Apparatus, method and computer program for rendering a spatial audio output signal
US20120328107A1 (en) Audio metrics for head-related transfer function (hrtf) selection or adaptation
JP2023530479A (en) Spatialized audio for mobile peripherals
US11611841B2 (en) Audio processing method and apparatus
WO2021134662A1 (en) Signal processing apparatus, method and system
CN108076400A (en) A kind of calibration and optimization method for 3D audio Headphone reproducings
US11863964B2 (en) Audio processing method and apparatus
CN105933818A (en) Method and system for implementing phantom centrally-mounted channel in three-dimensional acoustic field reconstruction of earphone
CN116208907A (en) Spatial audio processing device, device, method and headphones
WO2022227921A1 (en) Audio processing method and apparatus, wireless headset, and computer readable medium
CN110446140B (en) Sound signal adjustment system and method thereof
US10735885B1 (en) Managing image audio sources in a virtual acoustic environment
CN116233730A (en) Spatial audio processing device, apparatus, method and headphone
Mokhtari et al. Computer simulation of KEMAR's head-related transfer functions: Verification with measurements and acoustic effects of modifying head shape and pinna concavity
CN110740415B (en) Sound effect output device, computing device and sound effect control method thereof
CN109587619B (en) Three-channel non-center point sound field reconstruction method, device, storage medium and device
US20240089687A1 (en) Spatial audio adjustment for an audio device
CN109963232A (en) Audio signal playback device and corresponding audio signal processing method
TWI824522B (en) Audio playback system
US20240056756A1 (en) Method for Generating a Personalised HRTF
WO2025065317A1 (en) Audio processing apparatus and method, and extended reality apparatus, device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170526

Address after: Singapore Ang Mo Kio 65 Street No. 10 techpoint Building 1 floor, No. 8

Co-patentee after: AAC MICROTECH (CHANGZHOU) Co.,Ltd.

Patentee after: AAC TECHNOLOGIES Pte. Ltd.

Address before: 518057 Nanshan District province high tech Industrial Park, Shenzhen, North West New Road, No. 18

Co-patentee before: AAC MICROTECH (CHANGZHOU) Co.,Ltd.

Patentee before: AAC ACOUSTIC TECHNOLOGIES (SHENZHEN) Co.,Ltd.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20180905

Address after: No. 8, 2 floor, 85 Cavendish Science Park Avenue, Singapore

Patentee after: AAC TECHNOLOGIES Pte. Ltd.

Address before: Singapore Ang Mo Kio 65 Street No. 10 techpoint Building 1 floor, No. 8

Co-patentee before: AAC MICROTECH (CHANGZHOU) Co.,Ltd.

Patentee before: AAC TECHNOLOGIES Pte. Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111123