Brodin, 2016 - Google Patents
Working with emotions: Recommending subjective labels to music tracks using machine learning
- Document ID
- 7700810442464886531
- Author
- Brodin J
- Publication year
- 2016
Snippet
Curated music collection is a growing field as a result of the freedom and supply that streaming music services like Spotify provide us with. To be able to categorize music tracks based on subjective core values in a scalable manner, this thesis has explored if …
Classifications
- G06F17/30758—Query by example, e.g. query by humming
- G06F17/30749—Audio data retrieval using information manually generated or using information not derived from the audio data, e.g. title and artist information, time and location information, usage information, user ratings
- G06F17/30743—Audio data retrieval using features automatically derived from the audio content, e.g. descriptors, fingerprints, signatures, MEP-cepstral coefficients, musical score, tempo
- G06F17/3071—Clustering or classification including class or cluster creation or modification
- G06F17/30038—Querying based on information manually generated or based on information not derived from the media content, e.g. tags, keywords, comments, usage information, user ratings
- G06F17/30634—Querying (unstructured textual data)
- G10H2210/061—Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal, for extraction of musical phrases, isolation of musically relevant segments, e.g. musical thumbnail generation, or for temporal structure analysis of a musical piece, e.g. determination of the movement sequence of a musical work
- G06N99/005—Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
- G10H1/0008—Associated control or indicating means
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
- G10H2240/131—Library retrieval, i.e. searching a database or selecting a specific musical piece, segment, pattern, rule or parameter set
- G06N5/00—Computer systems utilising knowledge based models
- G10H2240/075—Musical metadata derived from musical analysis or for use in electrophonic musical instruments
- G10L17/00—Speaker identification or verification
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
Similar Documents
| Publication | Title |
|---|---|
| JP7617569B2 (en) | Methods for training neural networks to reflect emotional perception, and related systems and methods for classifying and discovering associated content and associated digital media files with embedded multi-dimensional property vectors |
| Ramírez et al. | Machine learning for music genre: multifaceted review and experimentation with audioset | |
| Gunawan et al. | Music recommender system based on genre using convolutional recurrent neural networks | |
| Kim et al. | One deep music representation to rule them all? A comparative analysis of different representation learning strategies | |
| Celma | Music recommendation | |
| KR20230079186A (en) | System and method for recommending semantically related content | |
| Whitman | Learning the meaning of music | |
| Panda et al. | How does the Spotify API compare to the music emotion recognition state-of-the-art | |
| Kacprzyk | Studies in Computational Intelligence, Volume 274 | |
| Pachet et al. | Analytical features: a knowledge-based approach to audio feature generation | |
| Prockup et al. | Modeling Genre with the Music Genome Project: Comparing Human-Labeled Attributes and Audio Features. | |
| Kolozali et al. | Automatic ontology generation for musical instruments based on audio analysis | |
| Vatolkin et al. | Multi-objective investigation of six feature source types for multi-modal music classification | |
| Ding et al. | Genre classification empowered by knowledge-embedded music representation | |
| Wang et al. | Research on intelligent recognition and classification algorithm of music emotion in complex system of music performance | |
| Ismanto et al. | Indonesian Music classification on folk and dangdut genre based on rolloff spectral feature using support vector machine (SVM) algorithm | |
| Herrera et al. | SIMAC: Semantic interaction with music audio contents | |
| Yeh et al. | Popular music representation: chorus detection & emotion recognition | |
| Mendjel et al. | A new audio approach based on user preferences analysis to enhance music recommendations | |
| Brodin | Working with emotions: Recommending subjective labels to music tracks using machine learning | |
| Özseven et al. | A content analysis of the research approaches in music genre recognition | |
| Arora et al. | Soundtrack Success: Unveiling Song Popularity Patterns Using Machine Learning Implementation | |
| Humayra et al. | Music genre classification and sentiment analysis of bengali music based on various inherent audio features | |
| Pollacci et al. | The italian music superdiversity: Geography, emotion and language: one resource to find them, one resource to rule them all | |
| Castillo et al. | Predicting Spotify audio features from Last.fm tags |