Audio-Language Datasets of Scenes and Events: A Survey

Bibliographic Details
Main Authors: Gijs Wijngaard, Elia Formisano, Michele Esposito, Michel Dumontier
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10854210/
Description
Summary: Audio-language models (ALMs) generate linguistic descriptions of sound-producing events and scenes. Advances in dataset creation and computational power have led to significant progress in this domain. This paper surveys 69 datasets used to train ALMs, covering research up to September 2024 (https://github.com/GLJS/audio-datasets). It provides a comprehensive analysis of dataset origins, audio and linguistic characteristics, and use cases. Key sources include YouTube-based datasets such as AudioSet, with over two million samples, and community platforms such as Freesound, with over one million samples. The survey evaluates acoustic and linguistic variability across datasets through principal component analysis of audio and text embeddings. It also analyzes data leakage through CLAP embeddings and examines sound category distributions to identify imbalances. Finally, it identifies key challenges in developing large, diverse datasets to enhance ALM performance, including dataset overlap, biases, accessibility barriers, and the predominance of English-language content, while highlighting specific areas requiring attention: multilingual dataset development, specialized domain coverage, and improved dataset accessibility.
ISSN: 2169-3536
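
The principal component analysis described in the summary can be illustrated in a few lines. Below is a minimal sketch, assuming the embeddings come from a CLAP checkpoint loaded via Hugging Face transformers; the model id "laion/clap-htsat-unfused" and the toy captions are illustrative assumptions, not the survey's exact setup.

```python
# Sketch: project CLAP text embeddings from two datasets onto their first
# two principal components, in the spirit of the survey's variability
# analysis. The checkpoint and captions below are placeholders.
import numpy as np
import torch
from sklearn.decomposition import PCA
from transformers import ClapModel, ClapProcessor

model_id = "laion/clap-htsat-unfused"  # assumption: any CLAP checkpoint works
model = ClapModel.from_pretrained(model_id)
processor = ClapProcessor.from_pretrained(model_id)

# Placeholder captions standing in for samples drawn from two datasets.
captions = {
    "dataset_a": ["a dog barks in the distance", "rain falls on a tin roof"],
    "dataset_b": ["a crowd applauds loudly", "an engine idles then revs"],
}

embeddings, labels = [], []
with torch.no_grad():
    for name, texts in captions.items():
        inputs = processor(text=texts, return_tensors="pt", padding=True)
        feats = model.get_text_features(**inputs)  # (n, d) projected embeddings
        embeddings.append(feats.cpu().numpy())
        labels.extend([name] * len(texts))

X = np.concatenate(embeddings)
coords = PCA(n_components=2).fit_transform(X)  # 2-D view of linguistic spread
for (x, y), label in zip(coords, labels):
    print(f"{label}: PC1={x:+.3f}, PC2={y:+.3f}")
```

The same procedure applies to audio embeddings via the model's audio tower; the survey runs the analysis on both modalities.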
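Likewise, the CLAP-based data leakage analysis amounts to finding near-duplicate pairs across datasets in embedding space. A minimal sketch follows, assuming embeddings are already extracted as NumPy arrays; the find_leakage helper and the 0.95 cosine-similarity threshold are hypothetical choices for illustration, not values prescribed by the survey.

```python
# Sketch: flag potential cross-dataset leakage by finding near-duplicate
# pairs in CLAP embedding space. Inputs and threshold are illustrative.
import numpy as np

def find_leakage(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.95):
    """Return index pairs (i, j) whose cosine similarity exceeds threshold."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = a @ b.T  # cosine similarity matrix, shape (len(a), len(b))
    i, j = np.where(sims > threshold)
    return list(zip(i.tolist(), j.tolist()))

# Toy example: random vectors plus one planted duplicate row.
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(100, 512))
emb_b = rng.normal(size=(80, 512))
emb_b[3] = emb_a[42]  # simulate the same clip appearing in both datasets
print("suspected overlapping pairs:", find_leakage(emb_a, emb_b))  # [(42, 3)]
```

In practice the threshold trades precision against recall: a high cutoff catches only exact or near-exact re-uploads, while a lower one also surfaces re-encoded or trimmed copies at the cost of false positives.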