

Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network

Foroughmand et al., 2020

Document ID
6565069096142279608
Author
Foroughmand H
Peeters G
Publication year
2020
Publication venue
The 2020 Joint Conference on AI Music Creativity


Snippet

Tempo and genre are two interleaved aspects of music: genres are often associated with rhythm patterns that are played in specific tempo ranges. In this paper, we focus on the recent Deep Rhythm system based on a harmonic representation of rhythm used as an input …
Continue reading at hal.science (PDF)
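
The snippet and title describe extending the Deep Rhythm tempo estimator with complex convolutions, multitask learning, and a multi-input network. As a minimal illustrative sketch only, not the authors' published architecture (which is described in the paper at hal.science), the PyTorch code below shows the multitask part of that idea: a shared convolutional encoder over a rhythm representation feeding two classification heads, one for tempo class and one for genre. All layer sizes, class counts, and input shapes here are illustrative assumptions.

```python
# Illustrative multitask sketch -- NOT the authors' published architecture.
# A shared encoder over a rhythm representation feeds two heads that
# jointly predict a tempo class and a genre class.
import torch
import torch.nn as nn

class MultitaskTempoGenre(nn.Module):
    def __init__(self, n_tempo_classes: int = 256, n_genre_classes: int = 10):
        super().__init__()
        # Shared encoder over a (batch, 1, freq, time) rhythm-representation input.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (batch, 32, 1, 1)
            nn.Flatten(),             # -> (batch, 32)
        )
        # Task-specific heads; class counts are assumptions.
        self.tempo_head = nn.Linear(32, n_tempo_classes)
        self.genre_head = nn.Linear(32, n_genre_classes)

    def forward(self, x: torch.Tensor):
        h = self.encoder(x)
        return self.tempo_head(h), self.genre_head(h)

if __name__ == "__main__":
    model = MultitaskTempoGenre()
    dummy = torch.randn(4, 1, 64, 128)  # fake rhythm-representation batch
    tempo_logits, genre_logits = model(dummy)
    print(tempo_logits.shape, genre_logits.shape)  # (4, 256) (4, 10)
```

In a multitask setup like this, training would typically minimize the sum of the tempo and genre cross-entropy losses, so the shared encoder learns features useful for both tasks.
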

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/36: Accompaniment arrangements
    • G10H 1/38: Chord
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H 2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/031: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H 2210/081: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H 2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/031: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H 2210/061: Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L 25/90: Pitch determination of speech signals
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L 25/03: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the type of extracted parameters
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/0008: Associated control or indicating means
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
    • G10L 25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
    • G10L 25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS
    • G10H 2240/00: Data organisation or data communication aspects, specifically adapted for electrophonic musical tools or instruments
    • G10H 2240/121: Musical libraries, i.e. musical databases indexed by musical parameters, wavetables, indexing schemes using musical parameters, musical rule bases or knowledge bases, e.g. for automatic composing methods
    • G10H 2240/131: Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
    • G10H 2240/141: Library retrieval matching, i.e. any of the steps of matching an inputted segment or phrase with musical database contents, e.g. query by humming, singing or playing; the steps may include, e.g. musical analysis of the input, musical feature extraction, query formulation, or details of the retrieval process

Similar Documents

CN104395953B (en) Assessment of beat, chord and downbeat from a music audio signal
Böck et al. Deconstruct, Analyse, Reconstruct: How to improve Tempo, Beat, and Downbeat Estimation.
EP2867887B1 (en) Accent based music meter analysis.
Davies et al. Context-dependent beat tracking of musical audio
Gillet et al. Transcription and separation of drum signals from polyphonic music
Böck et al. Accurate Tempo Estimation Based on Recurrent Neural Networks and Resonating Comb Filters.
Gkiokas et al. Music tempo estimation and beat tracking by applying source separation and metrical relations
Durand et al. Feature adapted convolutional neural networks for downbeat tracking
Heydari et al. Beatnet: Crnn and particle filtering for online joint beat downbeat and meter tracking
Foroughmand et al. Deep-rhythm for tempo estimation and rhythm pattern recognition
Chen et al. Tonet: Tone-octave network for singing melody extraction from polyphonic music
Peeters Spectral and temporal periodicity representations of rhythm for the automatic classification of music audio signal
Yazawa et al. Audio-based guitar tablature transcription using multipitch analysis and playability constraints
Jehan Downbeat prediction by listening and learning
Chiu et al. Source separation-based data augmentation for improved joint beat and downbeat tracking
Tzanetakis et al. An effective, simple tempo estimation method based on self-similarity and regularity
Foroughmand et al. Extending Deep Rhythm for Tempo and Genre Estimation Using Complex Convolutions, Multitask Learning and Multi-input Network
Raś et al. MIRAI: Multi-hierarchical, FS-tree based music information retrieval system
Sun et al. Musical Tempo Estimation Using a Multi-scale Network
Aarabi et al. Extending deep rhythm for tempo and genre estimation using complex convolutions, multitask learning and multi-input network
Heydari et al. BeatNet: A real-time music integrated beat and downbeat tracker
Rao et al. Improving polyphonic melody extraction by dynamic programming based dual f0 tracking
Ishwar Pitch estimation of the predominant vocal melody from heterophonic music audio recordings
Shandilya et al. Retrieving pitch of the singing voice in polyphonic audio
CN115472143B (en) Method and device for detecting note onsets and decoding notes in tonal music