Remote sensing image interpretation of geological lithology via a sensitive feature self-aggregation deep fusion network
Geological lithological interpretation is a key focus in Earth observation research, with applications in resource surveys, geological mapping, and environmental monitoring. Although deep learning (DL) methods have significantly improved the performance of lithological remote sensing interpretation,...
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-03-01 |
| Series: | International Journal of Applied Earth Observations and Geoinformation |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1569843225000317 |
| Summary: | Geological lithological interpretation is a key focus in Earth observation research, with applications in resource surveys, geological mapping, and environmental monitoring. Although deep learning (DL) methods have significantly improved the performance of lithological remote sensing interpretation, their accuracy remains far below the level achieved by visual interpretation performed by domain experts. This disparity stems primarily from the heavy reliance of current intelligent lithological interpretation methods on remote sensing imagery (RSI) alone, coupled with insufficient exploration of sensitive features (SF) and prior knowledge (PK), resulting in low interpretation precision. Furthermore, multi-modal SF and PK exhibit significant spatiotemporal heterogeneity, which hinders their direct integration into DL networks. In this work, we propose the sensitive feature self-aggregation deep fusion network (SFA-DFNet). Inspired by the visual interpretation practices of domain experts, we selected the five most commonly used SF and one type of PK as multi-modal supplementary information. To address the spatiotemporal heterogeneity of SF and PK, we designed a self-aggregation mechanism (SA-Mechanism) that dynamically selects and optimizes beneficial information from multi-modal features for lithological interpretation. This mechanism has broad applicability and can be extended to any number of data modalities. Additionally, we introduced the cross-modal feature interaction fusion module (CM-FIFM), which enhances the effective exchange and fusion of RSI, SF, and PK by leveraging long-range contextual information. Experimental results on two datasets demonstrate that differences in lithological genesis and type are critical factors affecting interpretation accuracy. Compared with seven state-of-the-art (SOTA) DL models, our method achieves an improvement of more than 3% in mIoU, demonstrating its effectiveness and robustness. |
|---|---|
| ISSN: | 1569-8432 |
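The abstract describes the SA-Mechanism only at a high level: a gate that weights an arbitrary number of modal feature maps (RSI, SF, PK) before fusion. As a purely illustrative sketch under that reading (not the authors' implementation; the function name, fixed softmax gating, and toy shapes are all assumptions), such a modality-agnostic aggregation could look like:

```python
import numpy as np

def self_aggregate(features, scores):
    """Fuse an arbitrary number of modal feature maps with softmax gates.

    features: list of M arrays, each (H, W, C) -- one per modality
              (e.g. RSI plus sensitive features and prior knowledge).
    scores:   per-modality relevance scores, shape (M,); in a trained
              network these would be predicted, here they are given.
    Returns the gated sum over modalities, shape (H, W, C).
    """
    w = np.exp(scores - scores.max())
    w = w / w.sum()                       # softmax over the M modalities
    stacked = np.stack(features, axis=0)  # (M, H, W, C)
    return np.tensordot(w, stacked, axes=1)

# Toy example: three modalities as constant 2x2 single-channel maps.
feats = [np.full((2, 2, 1), v) for v in (1.0, 2.0, 3.0)]
fused = self_aggregate(feats, np.array([0.0, 0.0, 0.0]))
# Equal scores give equal weights, so the fusion is a plain average (2.0).
```

Because the gate is a softmax over however many modalities are supplied, the same sketch extends to any number of data modalities, mirroring the extensibility claimed for the SA-Mechanism.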