ManyMusic-Stim
We[^1] present the ManyMusic-Stim dataset, an open-access music audio dataset designed for human experiments on musical emotions. It is part of the ManyMusic project, which aims to deep-phenotype the affective experiences evoked by music.
🤔 Another music dataset—why?
Psychological and neuroscientific research on music-evoked emotions has long been constrained by limitations in stimulus selection. Common issues include the use of artificially manipulated stimuli, copyright restrictions that prevent data sharing, narrow genre sampling, and experimenter bias. The ManyMusic-Stim dataset addresses these challenges by offering a large-scale, open-access collection of music audio files curated to ensure diverse representation across genres and styles.
🔬 How was it validated?
We conducted a series of experiments comparing the ManyMusic-Stim dataset to commercial music. Our results show that carefully curated subsets of the dataset are comparable to well-matched commercial tracks across various subjective ratings, including liking and feeling moved. These findings provide empirical support for the use of ManyMusic-Stim as a valid alternative to commercial music in affective research.
See our paper for more details.
😀 How to use it?
The ManyMusic-Stim dataset is available for download from Zenodo. The dataset includes:
- audio files in MP3 format,
- metadata in CSV format, and
- subjective ratings of musical emotions in CSV format.
The audio files are encoded at 320 kbps. The metadata includes information such as the title, artist, genre, and duration of each track. The subjective ratings of musical emotions include ratings for perceived valence, arousal, liking, feeling moved, familiarity, and professionalism, collected from 200+ Prolific participants.
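As a minimal sketch of working with the ratings, the CSV files can be loaded with Python's standard `csv` module and averaged per track. Note that the file name, `track_id` key, and the `valence`/`arousal` column names below are assumptions for illustration, not the dataset's actual schema; check the CSV headers in the Zenodo record.

```python
import csv
import io
from collections import defaultdict
from statistics import mean

def mean_ratings(csv_text, key="track_id", dims=("valence", "arousal")):
    """Average each rating dimension per track.

    Column names are hypothetical; adapt them to the actual CSV headers.
    """
    per_track = defaultdict(lambda: defaultdict(list))
    for row in csv.DictReader(io.StringIO(csv_text)):
        for dim in dims:
            per_track[row[key]][dim].append(float(row[dim]))
    return {
        track: {dim: mean(values) for dim, values in dim_lists.items()}
        for track, dim_lists in per_track.items()
    }

# Tiny illustrative sample; real data come from the Zenodo record,
# e.g. mean_ratings(open("ratings.csv").read()) with the true file name.
sample = """track_id,valence,arousal
jamendo_001,6.0,4.0
jamendo_001,8.0,6.0
suno_002,3.0,7.0
"""
print(mean_ratings(sample))
# → {'jamendo_001': {'valence': 7.0, 'arousal': 5.0}, 'suno_002': {'valence': 3.0, 'arousal': 7.0}}
```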
📚 How to cite?
If you use the ManyMusic-Stim dataset in your research, please cite the following paper:
[PAPER-INFO] [BIBTEX-INFO] [RIS-INFO]
🙏 Acknowledgments
This work was supported by the Max Planck Society (MPS) and Johanna Quandt Young Academy (JQYA) at Goethe University Frankfurt. Part of the data was presented at the International Conference on Music Perception and Cognition (ICMPC) in São Paulo, Brazil on 25 July 2025. We thank all participants for their contributions to this dataset.
👩‍⚖️ License
The ManyMusic-Stim dataset is licensed under:
- Jamendo audio files: XXX
- Suno audio files: XXX
- Metadata and human ratings: CC BY-NC-SA 4.0 International License
[^1]: This work is a collaboration between the Max Planck Institute for Empirical Aesthetics and Pompeu Fabra University. The main contributors are Seung-Goo Kim, Pablo Alonso, and Dmitry Bogdanov. Please direct any questions or comments to Seung-Goo Kim.