Deep Multimodal Learning for Seismoacoustic Fusion to Improve Earthquake‐Explosion Discrimination Within the Korean Peninsula


Bibliographic Details
Main Authors: Miro Ronac Giannone, Stephen Arrowsmith, Junghyun Park, Brian Stump, Chris Hayward, Eric Larson, Il‐Young Che
Format: Article
Language: English
Published: Wiley 2024-07-01
Series: Geophysical Research Letters
Online Access: https://doi.org/10.1029/2024GL109404
Description
Summary: Recent geophysical studies have highlighted the potential utility of integrating seismic and infrasound data to improve source characterization and event discrimination. However, the influence of each of these data types within an integrated framework is not yet well understood by the geophysical community. To help elucidate the role of each data type within a merged structure, we develop a neural network that fuses seismic and infrasound array data via a gated multimodal unit for earthquake‐explosion discrimination within the Korean Peninsula. Model performance is compared before and after adding the infrasound branch. We find that the seismoacoustic model outperforms the seismic model, with the majority of the improvement stemming from the explosions class. The influence of infrasound is quantified by analyzing the gated multimodal activations. Results indicate that the model relies comparatively more on the infrasound branch to correct seismic predictions.
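The fusion mechanism named in the abstract, a gated multimodal unit, can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, weight matrices (`Ws`, `Wi`, `Wz`), and feature vectors below are hypothetical placeholders, and the gate formulation follows the standard GMU pattern in which a learned sigmoid gate mixes per-dimension between the seismic and infrasound branches.

```python
import numpy as np

rng = np.random.default_rng(0)

def gmu_fuse(h_seis, h_infra, Ws, Wi, Wz):
    """Gated multimodal unit (illustrative sketch).

    A sigmoid gate z, computed from both inputs, decides per
    dimension how much to rely on the seismic branch (z) versus
    the infrasound branch (1 - z).
    """
    # Candidate representations from each modality
    hs = np.tanh(Ws @ h_seis)
    hi = np.tanh(Wi @ h_infra)
    # Gate from the concatenated raw features, squashed to (0, 1)
    z = 1.0 / (1.0 + np.exp(-(Wz @ np.concatenate([h_seis, h_infra]))))
    # Convex per-dimension mix of the two branches
    return z * hs + (1.0 - z) * hi, z

# Hypothetical feature sizes: seismic 8-d, infrasound 6-d, fused 4-d
d_s, d_i, d_h = 8, 6, 4
Ws = rng.normal(size=(d_h, d_s))
Wi = rng.normal(size=(d_h, d_i))
Wz = rng.normal(size=(d_h, d_s + d_i))

h_seis = rng.normal(size=d_s)
h_infra = rng.normal(size=d_i)
fused, gate = gmu_fuse(h_seis, h_infra, Ws, Wi, Wz)
print(fused.shape, gate.shape)
```

Inspecting `gate` (or its average over many events) is the kind of analysis the abstract refers to when quantifying how much the model leans on the infrasound branch.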
ISSN: 0094-8276, 1944-8007