
CN101383149B - Stringed music vibrato automatic detection method - Google Patents

Stringed music vibrato automatic detection method

Info

Publication number
CN101383149B
CN101383149B (application CN200810137404A)
Authority
CN
China
Prior art keywords
vibrato
music
steps
vector sequence
trill
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200810137404XA
Other languages
Chinese (zh)
Other versions
CN101383149A (en)
Inventor
韩纪庆
孙荣坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN200810137404XA priority Critical patent/CN101383149B/en
Publication of CN101383149A publication Critical patent/CN101383149A/en
Application granted granted Critical
Publication of CN101383149B publication Critical patent/CN101383149B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Auxiliary Devices For Music (AREA)

Abstract

弦乐音乐颤音自动检测方法,它涉及一种在自动音乐标注过程中对弦乐音乐实时检测的方法,以解决在弦乐音乐自动标注过程中,颤音对于自动音乐标注的影响较大以及传统的自动音乐标注方法不能对音乐中的颤音进行自动检测的问题。根据弦乐常用音域的音符数将颤音分为N类,通过音频识别的方法将N类颤音模型训练为匹配对象库;将输入待检测的音乐的音频信号,对音频信号进行特征提取得到特征矢量序列;以统计出来的颤音平均周期为长度对特征矢量序列进行分段;通过音频识别的方法对每一段矢量序列进行识别;连续M或M以上段被识别为同一类颤音的矢量序列所对应的时间段即检测为颤音的时间段。本发明自动检测颤音,去除颤音对于自动音乐标注的影响。


An automatic detection method for vibrato in string music, relating to a method for real-time detection of string music during automatic music labeling. It solves two problems in the automatic labeling of string music: vibrato strongly affects the labeling result, and traditional automatic labeling methods cannot detect vibrato in the music automatically. According to the number of notes in the commonly used range of string instruments, vibrato is divided into N classes, and the N vibrato class models are trained into a library of matching models by an audio recognition method. The audio signal of the music to be detected is input and features are extracted from it to obtain a feature vector sequence; the sequence is segmented using the statistically determined average vibrato period as the segment length; each segment is recognized by the audio recognition method; and the time span corresponding to M or more consecutive segments recognized as the same vibrato class is detected as a vibrato time span. The invention detects vibrato automatically, removing its influence on automatic music labeling.


Description

弦乐音乐颤音自动检测方法 Method for Automatic Detection of Vibrato in String Music

技术领域technical field

本发明涉及一种音频识别技术和自动音乐标注领域的检测方法，具体涉及一种在自动音乐标注过程中对弦乐音乐实时检测的方法。The invention relates to audio recognition technology and to a detection method in the field of automatic music labeling, and in particular to a method for real-time detection of string music during automatic music labeling.

背景技术Background technique

自动音乐标注是多媒体技术的一项重要应用，它是指通过对音乐音频信号的分析和处理，自动将其乐谱通过某种形式记录下来，以应用于辅助音乐教学、辅助音乐创作等许多音乐相关领域。虽然近年来自动音乐标注技术已经取得了长足的进步，但是至今仍有许多问题没有得到很好的解决，目前大部分研究成果都是在单个乐器独奏、主调音乐、无特殊技巧演奏等条件之上取得的，多乐器合奏的标注、复调音乐的自动标注、和旋和声的识别、颤音等特殊音效识别等复杂条件下的自动音乐标注进展缓慢。在许多弦乐器演奏的音乐中存在着大量用于修饰或表现乐曲情感、风格的颤音（在乐谱中用"tr"标记）。在针对这类乐器的自动音乐标注研究中，如果不进行颤音检测而直接进行标注是很容易出现错误的，甚至在旋律上让自动音乐标注系统摸不着头脑。一般情况下，颤音在声音效果上是两个连续的音阶快速交替出现，然而恰恰有很多音乐的片段却有非颤音的连续音阶快速交替出现的正常音符，如不加区分则会造成音乐标注上的错误（如误标注成十六分音符或三十二分音符等）。此外，又由于颤音音符出现速率的不确定性，即颤音本身只要求出现快速交替音符而并没有规定具体速率，其速率完全由乐曲需要及演奏者习惯、技术而定，因此如不予以专门检测则会在旋律上使自动音乐标注系统产生错误，目前还没有一种专门针对弦乐音乐颤音的自动检测方法。Automatic music labeling is an important application of multimedia technology: by analyzing and processing a music audio signal, the corresponding score is recorded automatically in some form, for use in many music-related fields such as computer-assisted music teaching and composition. Although automatic music labeling has made great progress in recent years, many problems remain unsolved. Most current research results are obtained under restrictive conditions such as a single solo instrument, homophonic music, and performance without special techniques; automatic labeling under complex conditions — labeling multi-instrument ensembles, automatically labeling polyphonic music, recognizing chords and harmony, and recognizing special effects such as vibrato — is progressing slowly. Music played on many stringed instruments contains numerous vibrato passages (marked "tr" in the score) that ornament the piece or express its emotion and style. In automatic-labeling research for such instruments, labeling directly without vibrato detection easily produces errors and can even leave the labeling system completely confused about the melody. As a sound effect, vibrato is normally the rapid alternation of two consecutive scale tones; yet many musical passages contain normal, non-vibrato notes in which consecutive scale tones also alternate rapidly, and failing to distinguish the two causes labeling errors (e.g., mislabeling the vibrato as sixteenth or thirty-second notes). Moreover, the rate of the alternating notes is uncertain: vibrato itself only requires rapidly alternating notes and prescribes no specific rate, which is determined entirely by the needs of the piece and the performer's habits and technique. Without dedicated detection, the labeling system therefore makes melodic errors. At present there is no automatic detection method dedicated to vibrato in string music.

发明内容Contents of the invention

本发明为解决在弦乐音乐自动标注过程中，颤音对于自动音乐标注的影响较大以及传统的自动音乐标注方法不能对音乐中的颤音进行自动检测的问题，提供一种弦乐音乐颤音自动检测方法。本发明由以下步骤实现：To solve the problems that vibrato strongly affects automatic music labeling of string music and that traditional automatic labeling methods cannot detect vibrato in music automatically, the present invention provides an automatic detection method for vibrato in string music. The invention is realized by the following steps:

步骤A1、根据弦乐常用音域的音符数N，将颤音分为N类，N表示自然数，通过音频识别的方法将N类颤音模型训练为匹配对象库；Step A1: according to the number N of notes in the commonly used range of string instruments, divide vibrato into N classes, where N is a natural number, and train the N vibrato class models into a library of matching models by an audio recognition method;

步骤A2、将输入待检测的音乐的音频信号记为s(n)，对音频信号s(n)进行特征提取得到特征矢量序列X={x1,x2,...,xS}，S代表自然数；Step A2: denote the audio signal of the input music to be detected as s(n), and extract features from s(n) to obtain the feature vector sequence X = {x1, x2, ..., xS}, where S is a natural number;

步骤A3、在分帧的基础上，以统计出来的颤音平均周期T为长度对特征矢量序列X进行分段，T代表大于0的实数；Step A3: on the basis of framing, segment the feature vector sequence X using the statistically determined average vibrato period T as the segment length, where T is a real number greater than 0;

步骤A4、通过音频识别的方法对每一段矢量序列进行识别；Step A4: recognize each segment of the vector sequence by the audio recognition method;

步骤A5、对于设定的参数M，连续M或M以上段被识别为同一类颤音的矢量序列所对应的时间段即检测为颤音的时间段。Step A5: for a set parameter M, the time span corresponding to M or more consecutive segments recognized as the same vibrato class is detected as a vibrato time span.

有益效果：本发明通过在分帧基础上以统计出来的颤音平均周期为长度对特征矢量序列进行分段，并逐段识别，检测出弦乐音乐中的颤音片段，从而实现了对音乐中的颤音的自动检测，以达到去除颤音对于自动音乐标注的影响的目的。Beneficial effects: on the basis of framing, the invention segments the feature vector sequence using the statistically determined average vibrato period as the segment length and recognizes the sequence segment by segment, detecting the vibrato passages in string music. Vibrato is thus detected automatically, removing its influence on automatic music labeling.

附图说明Description of drawings

图1是步骤A5中所述的检测的方法流程图；图2是一段测试用的待检测带颤音的弦乐音乐片段的频谱图，从图2中可以看到其中大约0.200秒至2.609秒为颤音，6.889秒至7.969秒为颤音；图3是步骤A5中所述的检测的方法对图2所示的音乐片段进行检测得到的结果（示例程序在步骤A1和步骤A4的对音乐段的识别中使用了基于矢量量化的识别方法），其中横坐标为端点名称，纵坐标表示实际和检测出的颤音端点所对应的时刻，单位为秒。Fig. 1 is a flowchart of the detection method described in step A5. Fig. 2 is a spectrogram of a test excerpt of string music containing vibrato; in Fig. 2, roughly 0.200 s to 2.609 s and 6.889 s to 7.969 s are vibrato. Fig. 3 shows the result of applying the detection method of step A5 to the excerpt of Fig. 2 (the example program uses vector-quantization-based recognition for the music segments in steps A1 and A4); the abscissa gives the endpoint names and the ordinate the times, in seconds, of the vibrato endpoints, with one legend marker denoting the actual vibrato endpoints and the other the detected vibrato endpoints.

具体实施方式Detailed ways

具体实施方式一：本实施方式由以下步骤组成：Specific Embodiment 1: This embodiment consists of the following steps:

步骤A1、根据弦乐常用音域的音符数N，将颤音分为N类，N表示自然数，通过音频识别的方法将N类颤音模型训练为匹配对象库；Step A1: according to the number N of notes in the commonly used range of string instruments, divide vibrato into N classes, where N is a natural number, and train the N vibrato class models into a library of matching models by an audio recognition method;

步骤A2、将输入待检测的音乐的音频信号记为s(n)，对音频信号s(n)进行特征提取得到特征矢量序列X={x1,x2,...,xS}，S代表自然数；Step A2: denote the audio signal of the input music to be detected as s(n), and extract features from s(n) to obtain the feature vector sequence X = {x1, x2, ..., xS}, where S is a natural number;

步骤A3、在分帧的基础上，以统计出来的颤音平均周期T为长度对特征矢量序列X进行分段，T代表大于0的实数；Step A3: on the basis of framing, segment the feature vector sequence X using the statistically determined average vibrato period T as the segment length, where T is a real number greater than 0;

步骤A4、通过音频识别的方法对每一段矢量序列进行识别；Step A4: recognize each segment of the vector sequence by the audio recognition method;

步骤A5、对于设定的参数M，连续M或M以上段被识别为同一类颤音的矢量序列所对应的时间段即检测为颤音的时间段。Step A5: for a set parameter M, the time span corresponding to M or more consecutive segments recognized as the same vibrato class is detected as a vibrato time span.

本实施方式的步骤A1和步骤A4中采用的音频识别的方法为矢量量化方法，另外神经网络方法和隐马尔科夫模型方法也同样适用于本实施方式。在本实施方式的步骤A2中所述的特征提取的过程为：对音频信号s(n)进行采样量化和预加重处理，假设说话人信号是短时平稳的，所以说话人信号可以进行分帧处理，具体分帧方法是采用可移动的有限长度窗口进行加权的方法来实现的，对加权后的音频信号sw(n)计算Mel倒谱系数（MFCC），从而得到特征矢量序列X={x1,x2,...,xS}。The audio recognition method used in steps A1 and A4 of this embodiment is vector quantization; neural network and hidden Markov model methods are equally applicable. The feature extraction in step A2 proceeds as follows: the audio signal s(n) is sampled, quantized, and pre-emphasized; assuming the signal is short-time stationary, it can be divided into frames, which is done by weighting with a movable finite-length window; the Mel-frequency cepstral coefficients (MFCC) are then computed from the weighted audio signal s_w(n), yielding the feature vector sequence X = {x1, x2, ..., xS}.
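The pre-processing just described (pre-emphasis, then framing by weighting with a movable finite-length window) might be sketched as follows. The frame length, hop size, pre-emphasis coefficient 0.97, and Hamming window are common defaults assumed here, not values fixed by the patent.

```python
import numpy as np

def preprocess(signal, frame_len=512, hop=256, alpha=0.97):
    """Pre-emphasis + framing + windowing of the signal s(n).

    Returns an array of shape (num_frames, frame_len); each row is one
    weighted frame s_w(n), ready for the MFCC computation of step A2.
    """
    s = np.asarray(signal, dtype=float)
    # Pre-emphasis s'(n) = s(n) - alpha*s(n-1): flattens the spectral tilt.
    s = np.append(s[0], s[1:] - alpha * s[:-1])

    # Framing: slide a finite-length window along the signal in hops.
    num_frames = 1 + (len(s) - frame_len) // hop
    frames = np.stack([s[i * hop:i * hop + frame_len]
                       for i in range(num_frames)])

    # Weight each frame with a Hamming window (one common window choice).
    return frames * np.hamming(frame_len)
```

With a 50% hop the frames overlap, which is the usual compromise between time resolution and redundancy for short-time analysis.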

MFCC参数的提取过程如下:The extraction process of MFCC parameters is as follows:

(1)对输入的音频信号进行分帧、加窗，然后作离散傅立叶变换，获得频谱分布信息。(1) Frame and window the input audio signal, then apply the discrete Fourier transform to obtain the spectral distribution.

设音频信号的DFT为Let the DFT of the audio signal be

X_a(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N},    0 ≤ k < N

式中x(n)为输入的音频信号，N表示傅立叶变换的点数；where x(n) is the input audio signal and N is the number of points of the Fourier transform;

(2)再求频谱幅度的平方，得到能量谱；(2) Square the magnitude spectrum to obtain the energy spectrum;

(3)将能量谱通过一组Mel尺度的三角形滤波器组；(3) Pass the energy spectrum through a bank of Mel-scale triangular filters;

对于步骤A5中所述的参数M，定义一个有M个滤波器的滤波器组（滤波器的个数和临界带的个数相近），采用的滤波器为三角滤波器，中心频率为f(m)，m=1,2,...,M，在本实施方式中令M=24；滤波器组中每个三角滤波器的跨度在Mel标度上是相等的，在本实施方式中取150Mel；三角滤波器的频率响应定义为：Define a filter bank of M triangular filters (the number of filters is close to the number of critical bands) with center frequencies f(m), m = 1, 2, ..., M; in this embodiment M = 24 (note that this reuses the symbol M of step A5 for a different quantity: the filter count rather than the segment threshold). The span of each triangular filter in the bank is equal on the Mel scale, 150 Mel in this embodiment. The frequency response of the triangular filter is defined as:

H_m(k) = 0,                                                          k < f(m-1)
H_m(k) = 2(k - f(m-1)) / [(f(m+1) - f(m-1)) (f(m) - f(m-1))],        f(m-1) ≤ k ≤ f(m)
H_m(k) = 2(f(m+1) - k) / [(f(m+1) - f(m-1)) (f(m+1) - f(m))],        f(m) ≤ k ≤ f(m+1)
H_m(k) = 0,                                                          k ≥ f(m+1)

其中 where: Σ_{m=0}^{M-1} H_m(k) = 1

(4)计算每个滤波器组输出的对数能量为：(4) Compute the logarithmic energy output by each filter as:

S(m) = ln( Σ_{k=0}^{N-1} |X_a(k)|² H_m(k) ),    0 ≤ m < M

(5)经离散余弦变换(DCT)得到MFCC系数:(5) Obtain MFCC coefficients through discrete cosine transform (DCT):

C(n) = Σ_{m=0}^{M-1} S(m) cos(πn(m + 0.5)/M),    0 ≤ n < M
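Steps (1)–(5) can be sketched end to end as below. The 24-filter bank matches the embodiment's M = 24; the sample rate, FFT size, the 13 retained coefficients, and the standard Mel conversion mel = 2595·log10(1 + f/700) are assumptions, since the patent does not specify them.

```python
import numpy as np

def hz_to_mel(f):
    # Standard Mel mapping (assumed; the patent does not give a formula).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sr=8000, n_filters=24, n_coeffs=13):
    """One windowed frame -> MFCC vector, following steps (1)-(5)."""
    N = len(frame)
    # (1)-(2): DFT of the frame, then the energy spectrum |X_a(k)|^2.
    power = np.abs(np.fft.rfft(frame)) ** 2          # one-sided, N//2+1 bins

    # (3): triangular filters with centers equally spaced on the Mel scale.
    edges_hz = mel_to_hz(np.linspace(0.0, hz_to_mel(sr / 2.0), n_filters + 2))
    bins = np.floor((N + 1) * edges_hz / sr).astype(int)
    H = np.zeros((n_filters, len(power)))
    for m in range(1, n_filters + 1):
        lo, mid, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, mid):
            H[m - 1, k] = (k - lo) / max(mid - lo, 1)   # rising slope
        for k in range(mid, hi):
            H[m - 1, k] = (hi - k) / max(hi - mid, 1)   # falling slope

    # (4): log filter-bank energies S(m) = ln(sum_k |X_a(k)|^2 H_m(k)).
    S = np.log(H.dot(power) + 1e-12)

    # (5): DCT of the log energies gives the cepstral coefficients C(n).
    n = np.arange(n_coeffs)[:, None]
    m = np.arange(n_filters)[None, :]
    return (np.cos(np.pi * n * (m + 0.5) / n_filters) * S).sum(axis=1)
```

The small constant added before the logarithm guards against empty filters on short frames; it is a numerical convenience, not part of the formulas above.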

具体实施方式二：参见图1～图3，本实施方式在具体实施方式一的基础上进一步限定了步骤A5中所述的检测由以下步骤组成：Specific Embodiment 2: Referring to Figs. 1 to 3, this embodiment further specifies, on the basis of Embodiment 1, that the detection in step A5 consists of the following steps:

步骤B1、将计数器的值n清零，n为自然数；Step B1: clear the counter value n to zero, n being a natural number;

步骤B2、从特征矢量序列X中取一段长度为T的矢量序列；Step B2: take a vector sequence of length T from the feature vector sequence X;

步骤B3、通过音频识别的方法判断长度为T的矢量序列是否为颤音且同时与上一个记录的颤音类别相同，判断结果为是，则进入步骤B4，判断结果为否，则进入步骤B5；Step B3: judge by the audio recognition method whether the length-T vector sequence is vibrato of the same class as the last recorded vibrato; if yes, go to step B4; if no, go to step B5;

步骤B4、记录该颤音的类别，计数器的值n加1并返回步骤B2；Step B4: record the class of this vibrato, increment the counter value n by 1, and return to step B2;

步骤B5、判断计数器的值n是否大于或等于M（可令M等于3），判断结果为是，则进入步骤B6，判断结果为否，则返回步骤B1继续检测；Step B5: judge whether the counter value n is greater than or equal to M (M may be set to 3); if yes, go to step B6; if no, return to step B1 and continue detection;

步骤B6、检测到一段颤音并输出结果；Step B6: a vibrato passage is detected; output the result;

步骤B7、判断音频流是否结束，判断结果为是，则结束检测过程，判断结果为否，则返回步骤B1继续检测。Step B7: judge whether the audio stream has ended; if yes, end the detection process; if no, return to step B1 and continue detection.

Claims (3)

1. An automatic detection method for vibrato in string music, characterized in that it comprises the following steps:
Step A1: according to the number N of notes in the commonly used range of string instruments, divide vibrato into N classes, where N is a natural number, and train the N vibrato class models into a library of matching models by an audio recognition method;
Step A2: denote the audio signal of the input music to be detected as s(n), and extract features from s(n) to obtain the feature vector sequence X = {x1, x2, ..., xS}, where S is a natural number;
Step A3: on the basis of framing, segment the feature vector sequence X using the statistically determined average vibrato period T as the segment length, where T is a real number greater than 0;
Step A4: recognize each segment of the vector sequence by the audio recognition method;
Step A5: for a set parameter M, the time span corresponding to M or more consecutive segments recognized as the same vibrato class is detected as a vibrato time span.
2. The automatic detection method for vibrato in string music according to claim 1, characterized in that the audio recognition method in steps A1 and A4 is a vector quantization method, a neural network method, or a hidden Markov model (HMM) method.
3. The automatic detection method for vibrato in string music according to claim 1 or 2, characterized in that the detection in step A5 comprises the following steps:
Step B1: clear the counter value n to zero, where n is a natural number;
Step B2: take a vector sequence of length T from the feature vector sequence X;
Step B3: judge by the audio recognition method whether the length-T vector sequence is vibrato of the same class as the last recorded vibrato; if yes, go to step B4; if no, go to step B5;
Step B4: record the class of this vibrato, increment the counter value n by 1, and return to step B2;
Step B5: judge whether the counter value n is greater than or equal to M; if yes, go to step B6; if no, return to step B1 and continue detection;
Step B6: a vibrato passage is detected; output the result;
Step B7: judge whether the audio stream has ended; if yes, end the detection process; if no, return to step B1 and continue detection.
CN200810137404XA 2008-10-27 2008-10-27 Stringed music vibrato automatic detection method Expired - Fee Related CN101383149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810137404XA CN101383149B (en) 2008-10-27 2008-10-27 Stringed music vibrato automatic detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810137404XA CN101383149B (en) 2008-10-27 2008-10-27 Stringed music vibrato automatic detection method

Publications (2)

Publication Number Publication Date
CN101383149A CN101383149A (en) 2009-03-11
CN101383149B true CN101383149B (en) 2011-02-02

Family

ID=40462951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810137404XA Expired - Fee Related CN101383149B (en) 2008-10-27 2008-10-27 Stringed music vibrato automatic detection method

Country Status (1)

Country Link
CN (1) CN101383149B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930873B (en) * 2012-09-29 2014-04-09 福州大学 Information entropy based music humming detecting method
FR3022051B1 (en) * 2014-06-10 2016-07-15 Weezic METHOD FOR TRACKING A MUSICAL PARTITION AND ASSOCIATED MODELING METHOD
CN106997769B (en) * 2017-03-25 2020-04-24 腾讯音乐娱乐(深圳)有限公司 Trill recognition method and device
CN112185322B (en) * 2019-07-01 2024-02-02 抚顺革尔电声科技有限公司 Dynamic controller
CN110827859B (en) * 2019-10-15 2022-04-01 北京雷石天地电子技术有限公司 Method and device for vibrato recognition

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1168521A * 1996-03-01 1997-12-24 Matsushita Electric Industrial Co., Ltd. A device that adds vibrato to a singing voice
CN1645478A * 2004-01-21 2005-07-27 Microsoft Corporation Segmental tonal modeling for tonal languages


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JP 2008-15212 A 2008.01.24
JP 2008-39984 A 2008.02.21

Also Published As

Publication number Publication date
CN101383149A (en) 2009-03-11

Similar Documents

Publication Publication Date Title
CN108417228B (en) A method for measuring the similarity of vocal timbre under the migration of musical instrument timbre
CN110111773A (en) The more New Method for Instrument Recognition of music signal based on convolutional neural networks
CN109036458A (en) A kind of multilingual scene analysis method based on audio frequency characteristics parameter
CN109545191B (en) Real-time detection method for initial position of human voice in song
CN110599987A (en) Piano note recognition algorithm based on convolutional neural network
CN104616663A (en) A Music Separation Method Combining HPSS with MFCC-Multiple Repetition Model
US9305570B2 (en) Systems, methods, apparatus, and computer-readable media for pitch trajectory analysis
CN114678039B (en) A singing evaluation method based on deep learning
CN101383149B (en) Stringed music vibrato automatic detection method
Banchhor et al. Musical instrument recognition using zero crossing rate and short-time energy
CN106997765A (en) The quantitatively characterizing method of voice tone color
Ducher et al. Folded CQT RCNN for real-time recognition of instrument playing techniques
Sonnleitner et al. A simple and effective spectral feature for speech detection in mixed audio signals
Azarloo et al. Automatic musical instrument recognition using K-NN and MLP neural networks
Yamamoto et al. Investigating time-frequency representations for audio feature extraction in singing technique classification
CN115662465A (en) Voice recognition algorithm and device suitable for national stringed instruments
Zwan et al. System for automatic singing voice recognition
Dong et al. Vocal Pitch Extraction in Polyphonic Music Using Convolutional Residual Network.
CN117012230A (en) Evaluation model for singing pronunciation and character biting
Murthy et al. Vocal and Non-vocal Segmentation based on the Analysis of Formant Structure
CN112259063B (en) A Multi-Pitch Estimation Method Based on Note Transient Dictionary and Steady-state Dictionary
Ali-MacLachlan Computational analysis of style in Irish traditional flute playing
Chaudhary et al. Musical instrument recognition using audio features with integrated entropy method
Tindale Classification of snare drum sounds using neural networks
CN112908343A (en) Acquisition method and system for bird species number based on cepstrum spectrogram

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110202

Termination date: 20211027

CF01 Termination of patent right due to non-payment of annual fee