A data-driven approach to mid-level perceptual musical feature modeling (bibtex)
by Aljanaki, Anna and Soleymani, Mohammad
Abstract:
Musical features and descriptors can be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats, and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower-level ones. The features belonging to the middle level can both improve automatic recognition of high-level descriptors and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them, and therefore designing a hand-crafted feature extractor for them, can be difficult. In this paper, we derive the mid-level descriptors from data. We collect and release a dataset (https://osf.io/5aupt/) of 5000 songs annotated by musicians with seven mid-level descriptors, namely melodiousness, tonal and rhythmic stability, modality, rhythmic complexity, dissonance, and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.
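The task described above — mapping a spectrogram to scores for the seven mid-level descriptors — can be framed as multi-output regression. The following is a minimal illustrative sketch, not the authors' code: it uses a plain magnitude STFT and an untrained linear map (with placeholder random weights) where the paper compares deep-learning models, just to make the input/output shapes of the task concrete.

```python
# Hypothetical sketch of the task in the paper: spectrogram in,
# seven mid-level descriptor scores out. The linear map here is a
# stand-in for the deep models the paper actually compares.
import numpy as np

MID_LEVEL = ["melodiousness", "tonal stability", "rhythmic stability",
             "modality", "rhythmic complexity", "dissonance", "articulation"]

def spectrogram(audio, n_fft=256, hop=128):
    """Magnitude STFT of a mono signal, shape (time, freq)."""
    window = np.hanning(n_fft)
    frames = [audio[i:i + n_fft] * window
              for i in range(0, len(audio) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def predict_descriptors(spec, weights, bias):
    """Pool the spectrogram over time, then map the pooled
    feature vector to one score per mid-level descriptor."""
    pooled = spec.mean(axis=0)          # (freq,)
    return pooled @ weights + bias      # (7,)

rng = np.random.default_rng(0)
audio = rng.standard_normal(4096)       # placeholder for a real audio clip
spec = spectrogram(audio)
W = rng.standard_normal((spec.shape[1], len(MID_LEVEL))) * 0.01
b = np.zeros(len(MID_LEVEL))
scores = predict_descriptors(spec, W, b)
print(dict(zip(MID_LEVEL, np.round(scores, 3))))
```

In the paper, the linear map would be replaced by a convolutional network trained on the released annotations; the sketch only fixes the interface: one spectrogram in, a length-7 descriptor vector out.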
Reference:
A data-driven approach to mid-level perceptual musical feature modeling (Aljanaki, Anna and Soleymani, Mohammad), In Proceedings of the 19th International Society for Music Information Retrieval Conference, arXiv, 2018.
Bibtex Entry:
@inproceedings{aljanaki_data-driven_2018,
	address = {Paris, France},
	title = {A data-driven approach to mid-level perceptual musical feature modeling},
	url = {https://arxiv.org/abs/1806.04903},
	abstract = {Musical features and descriptors can be coarsely divided into three levels of complexity. The bottom level contains the basic building blocks of music, e.g., chords, beats, and timbre. The middle level contains concepts that emerge from combining the basic blocks: tonal and rhythmic stability, harmonic and rhythmic complexity, etc. High-level descriptors (genre, mood, expressive style) are usually modeled using the lower-level ones. The features belonging to the middle level can both improve automatic recognition of high-level descriptors and provide new music retrieval possibilities. Mid-level features are subjective and usually lack clear definitions. However, they are very important for human perception of music, and on some of them people can reach high agreement, even though defining them, and therefore designing a hand-crafted feature extractor for them, can be difficult. In this paper, we derive the mid-level descriptors from data. We collect and release a dataset (https://osf.io/5aupt/) of 5000 songs annotated by musicians with seven mid-level descriptors, namely melodiousness, tonal and rhythmic stability, modality, rhythmic complexity, dissonance, and articulation. We then compare several approaches to predicting these descriptors from spectrograms using deep learning. We also demonstrate the usefulness of these mid-level features using music emotion recognition as an application.},
	booktitle = {Proceedings of the 19th {International} {Society} for {Music} {Information} {Retrieval} {Conference}},
	publisher = {arXiv},
	author = {Aljanaki, Anna and Soleymani, Mohammad},
	month = sep,
	year = {2018},
	keywords = {Virtual Humans}
}