Research article · DOI: 10.1145/1859799.1859813 · ACM Audio Mostly (AM) Conference Proceedings

The Musical Avatar: a visualization of musical preferences by means of audio content description

Published: 15 September 2010

Abstract

The music we like (i.e., our musical preferences) encodes and communicates key information about ourselves. Depicting such preferences in a condensed and easily understandable way is very appealing, especially considering current trends in social network communication. In this paper we propose a method to automatically generate, from a set of preferred music tracks, an iconic representation of a user's musical preferences: the Musical Avatar. Starting from the raw audio signal, we first compute over 60 low-level audio features. Then, by applying pattern recognition methods, we infer a set of semantic descriptors for each track in the collection. Next, we summarize these track-level semantic descriptors into a user profile. Finally, we map this collection-wise description to the visual domain by creating a humanoid cartoon character that represents the user's musical preferences. We performed a proof-of-concept evaluation of the proposed method with 11 subjects, with promising results. The analysis of the users' evaluations shows a clear preference for avatars generated from the proposed semantic descriptors over avatars derived from neutral or randomly generated values. We also found general agreement that the proposed visualization strategy is representative of the users' musical preferences.
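The abstract describes a four-stage pipeline: per-track low-level feature extraction, pattern-recognition inference of semantic descriptors, collection-wise summarization into a user profile, and a mapping from that profile to visual avatar attributes. A minimal sketch of the last three stages is shown below; the descriptor names, the toy inference rules, the mean-based summarization, and the avatar attribute grammar are all illustrative assumptions, not the paper's actual features, classifiers, or visual vocabulary.

```python
# Illustrative sketch of the pipeline from the abstract. All names and
# thresholds here are assumptions for demonstration only.
from statistics import mean

def infer_semantic_descriptors(track_features):
    """Stand-in for the pattern-recognition step: map a track's
    low-level features to semantic descriptors in [0, 1]."""
    energy = min(1.0, track_features.get("rms", 0.0) * 2)
    brightness = min(1.0, track_features.get("spectral_centroid", 0.0) / 8000)
    return {"danceability": energy, "acousticness": 1.0 - brightness}

def summarize_profile(tracks):
    """Summarize track-level descriptors into a collection-wise user
    profile (here: the per-descriptor mean over all tracks)."""
    per_track = [infer_semantic_descriptors(t) for t in tracks]
    return {k: mean(d[k] for d in per_track) for k in per_track[0]}

def profile_to_avatar(profile):
    """Map the continuous profile to discrete visual attributes of a
    cartoon avatar (hypothetical attribute grammar)."""
    return {
        "pose": "dancing" if profile["danceability"] > 0.5 else "standing",
        "instrument": ("acoustic guitar" if profile["acousticness"] > 0.5
                       else "synthesizer"),
    }

# Toy low-level features for two tracks in a user's collection.
tracks = [
    {"rms": 0.4, "spectral_centroid": 2500.0},
    {"rms": 0.1, "spectral_centroid": 6000.0},
]
avatar = profile_to_avatar(summarize_profile(tracks))
print(avatar)
```

The key design point carried over from the paper is that the avatar is built from the *summarized* collection-level profile, not from any single track, so one outlier song has limited influence on the final visualization.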




Published In

AM '10: Proceedings of the 5th Audio Mostly Conference: A Conference on Interaction with Sound
September 2010, 156 pages
ISBN: 9781450300469
DOI: 10.1145/1859799

Sponsors

  • The Interactive Institute AB

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. audio content analysis
  2. music information research (MIR)
  3. music visualization
  4. semantic retrieval
  5. user modeling

Qualifiers

  • Research-article

Conference

AM '10: The 5th Audio Mostly Conference
September 15-17, 2010
Piteå, Sweden

Acceptance Rates

Overall acceptance rate: 177 of 275 submissions (64%)

Article Metrics

  • Downloads (last 12 months): 16
  • Downloads (last 6 weeks): 1

Reflects downloads up to 10 Oct 2024.


Cited By

  • (2022) "A survey on emotional visualization and visual analysis." Journal of Visualization 26(1), 177–198. DOI: 10.1007/s12650-022-00872-5. Online: 10 Sep 2022.
  • (2021) "Exploiting MUSIC model to solve cold-start user problem in content-based music recommender systems." Intelligent Decision Technologies, 1–12. DOI: 10.3233/IDT-210196. Online: 3 Dec 2021.
  • (2020) "A Survey on Visualizations for Musical Data." Computer Graphics Forum 39(6), 82–110. DOI: 10.1111/cgf.13905. Online: 5 Mar 2020.
  • (2019) "Semantic audio content-based music recommendation and visualization based on user preference examples." Information Processing and Management 49(1), 13–33. DOI: 10.1016/j.ipm.2012.06.004. Online: 22 Nov 2019.
  • (2014) "Using adaptive avatars for visualizing recent music listening history and supporting music discovery." Proceedings of the 11th Conference on Advances in Computer Entertainment Technology, 1–10. DOI: 10.1145/2663806.2663820. Online: 11 Nov 2014.
  • (2014) "Search result visualization with characters for children." Proceedings of the 2014 Conference on Interaction Design and Children, 125–134. DOI: 10.1145/2593968.2593983. Online: 17 Jun 2014.
  • (2012) "A Comparison of Methods for Visualizing Musical Genres." Proceedings of the 2012 16th International Conference on Information Visualisation, 636–645. DOI: 10.1109/IV.2012.107. Online: 11 Jul 2012.
