Foroughmand et al., 2020 - Google Patents
Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network
- Document ID
- 6565069096142279608
- Author
- Foroughmand H
- Peeters G
- Publication year
- 2020
- Publication venue
- The 2020 Joint Conference on AI Music Creativity
Snippet
Tempo and genre are two interleaved aspects of music: genres are often associated with rhythm patterns that are played in specific tempo ranges. In this paper, we focus on the recent Deep Rhythm system, which is based on a harmonic representation of rhythm used as an input …
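The snippet sketches the paper's idea at a high level: a harmonic, complex-valued representation of rhythm feeds a convolutional network trained on tempo and genre jointly. As a rough illustration only, here is a minimal PyTorch sketch of such a shared-backbone multitask classifier. Everything in it is an assumption for illustration: the class counts, layer sizes, and input shape are invented, and the real/imaginary channel split merely stands in for the paper's complex convolutions, which this sketch does not implement.

```python
# Minimal multitask sketch (illustrative assumptions throughout; this is
# NOT the authors' architecture). A complex-valued rhythm representation
# is approximated as two input channels: real and imaginary parts.
import torch
import torch.nn as nn

class MultitaskRhythmNet(nn.Module):
    def __init__(self, n_tempo_classes=256, n_genre_classes=10):
        super().__init__()
        # Shared convolutional backbone over the 2-channel input map.
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads: multitask learning shares the backbone
        # features between the tempo and genre objectives.
        self.tempo_head = nn.Linear(32, n_tempo_classes)
        self.genre_head = nn.Linear(32, n_genre_classes)

    def forward(self, x):
        h = self.backbone(x)
        return self.tempo_head(h), self.genre_head(h)

model = MultitaskRhythmNet()
x = torch.randn(4, 2, 96, 64)  # hypothetical batch of rhythm maps
tempo_logits, genre_logits = model(x)
print(tempo_logits.shape, genre_logits.shape)  # (4, 256) (4, 10)
```

In training, each head would typically get its own cross-entropy loss and the two losses would be summed (possibly weighted), which is the usual way a shared backbone benefits from both sets of labels.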
Classifications
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H1/00—Details of electrophonic musical instruments
        - G10H1/36—Accompaniment arrangements
          - G10H1/38—Chord
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
        - G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
          - G10H2210/081—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H2210/00—Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
        - G10H2210/031—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
          - G10H2210/061—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/90—Pitch determination of speech signals
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the type of extracted parameters
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H1/00—Details of electrophonic musical instruments
        - G10H1/0008—Associated control or indicating means
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
          - G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10H—ELECTROPHONIC MUSICAL INSTRUMENTS
      - G10H2240/00—Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
        - G10H2240/121—Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
          - G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
            - G10H2240/141—Library retrieval matching, i.e. any of the steps of matching an inputted segment or phrase with musical database contents, e.g. query by humming, singing or playing; the steps may include, e.g. musical analysis of the input, musical feature extraction, query formulation, or details of the retrieval process
Similar Documents
Publication | Title |
---|---|
CN104395953B (en) | The assessment of beat, chord and downbeat from music audio signal |
Böck et al. | Deconstruct, Analyse, Reconstruct: How to improve Tempo, Beat, and Downbeat Estimation. | |
EP2867887B1 (en) | Accent based music meter analysis. | |
Davies et al. | Context-dependent beat tracking of musical audio | |
Gillet et al. | Transcription and separation of drum signals from polyphonic music | |
Böck et al. | Accurate Tempo Estimation Based on Recurrent Neural Networks and Resonating Comb Filters. | |
Gkiokas et al. | Music tempo estimation and beat tracking by applying source separation and metrical relations | |
Durand et al. | Feature adapted convolutional neural networks for downbeat tracking | |
Heydari et al. | Beatnet: Crnn and particle filtering for online joint beat downbeat and meter tracking | |
Foroughmand et al. | Deep-rhythm for tempo estimation and rhythm pattern recognition | |
Chen et al. | Tonet: Tone-octave network for singing melody extraction from polyphonic music | |
Peeters | Spectral and temporal periodicity representations of rhythm for the automatic classification of music audio signal | |
Yazawa et al. | Audio-based guitar tablature transcription using multipitch analysis and playability constraints | |
Jehan | Downbeat prediction by listening and learning | |
Chiu et al. | Source separation-based data augmentation for improved joint beat and downbeat tracking | |
Tzanetakis et al. | An effective, simple tempo estimation method based on self-similarity and regularity | |
Foroughmand et al. | Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network | |
Raś et al. | MIRAI: Multi-hierarchical, FS-tree based music information retrieval system | |
Sun et al. | Musical Tempo Estimation Using a Multi-scale Network | |
Aarabi et al. | Extending deep rhythm for tempo and genre estimation using complex convolutions, multitask learning and multi-input network | |
Heydari et al. | BeatNet: A real-time music integrated beat and downbeat tracker | |
Rao et al. | Improving polyphonic melody extraction by dynamic programming based dual f0 tracking | |
Ishwar | Pitch estimation of the predominant vocal melody from heterophonic music audio recordings | |
Shandilya et al. | Retrieving pitch of the singing voice in polyphonic audio | |
CN115472143B (en) | Method and device for detecting musical note onsets in tonal music and decoding notes |