SRFNet: Multimodal Based Selective Receptive Field Neural Network for Time Series Forecast of Flood Range


Saved in:
Bibliographic Details
Main Authors: Zhiqing Li, Zeqiang Chen, Lai Chen, Xu Tang, Nengcheng Chen
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10943213/
Description
Summary:Flood disasters are typical natural disasters that cause human casualties and property losses every year. Benefiting from powerful feature-abstraction capabilities and automatic tuning, deep learning has become a powerful tool for disaster prediction. Nonetheless, many existing methods are developed for natural images and do not account for the unique characteristics of remote-sensing images and other modal data. Furthermore, many methods are too complex, leading to poor computational efficiency and interpretability. To this end, we propose a multimodal-based selective receptive field neural network (SRFNet). It is built entirely on convolutional neural networks, which are simpler and more efficient than other state-of-the-art methods, and it incorporates selective large-kernel convolution for multiscale analysis of remote-sensing images. In addition, the rainfall and water-level modalities are fully exploited to improve performance. To verify the effectiveness and robustness of SRFNet, extensive and detailed experiments were conducted on Dongting Lake and Poyang Lake with data spanning 2010 to 2020. Our method outperforms seven other state-of-the-art methods and stably achieves a structural similarity of more than 0.9 at multiple resolutions.
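The selective large-kernel idea mentioned in the abstract can be illustrated with a minimal sketch: two convolution branches with different receptive fields are fused by softmax weights derived from pooled branch statistics. This is not the paper's implementation; the function names, the use of global mean pooling as the selection signal, and the single-channel setting are all simplifying assumptions for illustration.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode 2-D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def same_conv(x, k):
    """'Same' convolution for odd-sized kernels via edge padding."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    return conv2d_valid(xp, k)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def selective_receptive_field(x, small_k, large_k):
    """Fuse a small-kernel and a large-kernel branch with selection weights.

    Both branches produce outputs of the input's spatial size; a softmax over
    each branch's global average response (a hypothetical stand-in for a
    learned attention module) decides how much each receptive field contributes.
    """
    a = same_conv(x, small_k)   # fine-scale branch
    b = same_conv(x, large_k)   # coarse-scale branch
    w = softmax(np.array([a.mean(), b.mean()]))  # selection weights
    return w[0] * a + w[1] * b

# Usage: averaging kernels at two scales on a constant image leave it unchanged.
x = np.ones((8, 8))
small = np.ones((3, 3)) / 9.0
large = np.ones((7, 7)) / 49.0
y = selective_receptive_field(x, small, large)
```

In the actual SRFNet the selection signal would be produced by learned layers over multichannel feature maps; the sketch only shows how branch outputs at different receptive fields can be combined through data-dependent weights.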
ISSN:1939-1404
2151-1535