A dataset for environmental sound recognition in embedded systems for autonomous vehicles

Bibliographic Details
Main Authors: André Luiz Florentino, Eva Laussac Diniz, Plinio Thomaz Aquino-Jr
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Data
Online Access: https://doi.org/10.1038/s41597-025-05446-2
Description
Summary: Environmental sound recognition may play a crucial role in the development of autonomous vehicles by mimicking human behavior, particularly by complementing sight and touch to create a comprehensive sensory system. Just as humans rely on auditory cues to detect and respond to critical events such as emergency sirens, honking horns, or the approach of other vehicles and pedestrians, autonomous vehicles equipped with advanced sound recognition capabilities may significantly enhance their situational awareness and decision-making. To promote this approach, we extended the UrbanSound8K (US8K) dataset, a benchmark in urban sound classification research, by merging classes deemed irrelevant for autonomous vehicles into a new class named ‘background’ and by adding a ‘silence’ class sourced from Freesound.org. The resulting dataset, named UrbanSound8K for Autonomous Vehicles (US8K_AV), contains 4.94 hours of annotated audio comprising 4,908 WAV files distributed among 6 classes. It supports the development of predictive models that can be deployed on embedded systems such as the Raspberry Pi.
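A minimal sketch of how such a dataset's annotations might be summarized per class, assuming (hypothetically) that US8K_AV keeps the UrbanSound8K convention of a metadata CSV with `slice_file_name`, `fold`, and `class` columns. The column names and the sample rows below are illustrative, not taken from the actual dataset.

```python
import csv
import io
from collections import Counter

# Illustrative stand-in for a US8K-style metadata file; the file names,
# fold numbers, and class labels here are hypothetical examples only.
SAMPLE_METADATA = """slice_file_name,fold,class
100032-3-0-0.wav,1,siren
100263-2-0-117.wav,2,background
100648-1-0-0.wav,3,silence
"""

def class_distribution(csv_text: str) -> Counter:
    """Count annotated audio clips per class from US8K-style metadata."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row["class"] for row in reader)

print(class_distribution(SAMPLE_METADATA))
```

In practice the same function would be pointed at the dataset's real metadata file to verify the reported distribution of 4,908 clips across the 6 classes.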
ISSN: 2052-4463