MusiQAl: A Dataset for Music Question–Answering through Audio–Video Fusion

Bibliographic Details
Main Authors: Anna-Maria Christodoulou, Kyrre Glette, Olivier Lartillot, Alexander Refsum Jensenius
Format: Article
Language: English
Published: Ubiquity Press 2025-07-01
Series: Transactions of the International Society for Music Information Retrieval
Subjects:
Online Access: https://account.transactions.ismir.net/index.php/up-j-tismir/article/view/222
Description
Summary: Music question–answering (MQA) is a machine learning task in which a computational system analyzes and answers questions about music-related data. Traditional methods prioritize audio, overlooking visual and embodied aspects that are crucial to understanding music performance. We introduce MusiQAl, a multimodal dataset of 310 music performance videos and 11,793 human-annotated question–answer pairs spanning diverse musical traditions and styles. Grounded in musicology and music psychology, MusiQAl emphasizes multimodal reasoning, causal inference, and cross-cultural understanding of performer–music interaction. We benchmark the AVST and LAVISH architectures on MusiQAl, revealing their strengths and limitations and underscoring the importance of integrating multimodal learning and domain expertise to advance MQA and music information retrieval.
ISSN: 2514-3298
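
This record does not document the dataset's file layout or annotation schema. As a purely illustrative sketch of how a single question–answer entry in an audio–video QA dataset such as MusiQAl might be represented, consider the following; the field names, answer format, and example values are assumptions, not taken from the published dataset.

```python
# Hypothetical sketch only: the actual MusiQAl schema is not described in this record.
# Field names, the modality label, and the example values below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MusiQAlEntry:
    video_id: str   # identifier of one of the 310 performance videos (assumed naming)
    question: str   # human-annotated question about the performance
    answer: str     # human-annotated answer
    modality: str   # e.g. "audio", "visual", or "audio-visual" reasoning (assumed field)

# Toy example of what such an entry might look like; not drawn from the dataset itself.
example = MusiQAlEntry(
    video_id="performance_000",
    question="Which instrument does the performer play?",
    answer="violin",
    modality="audio-visual",
)

if __name__ == "__main__":
    print(example)
```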