MFF: A Deep Learning Model for Multi-Modal Image Fusion Based on Multiple Filters
Multi-modal image fusion refers to fusing the features of two or more different images captured over the same viewing range in order to increase the amount of information contained in a single image. This study proposes a multi-modal image fusion deep network called the MFF network. Compared with traditional...
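For context, fusion at its simplest combines two co-registered source images into one output that keeps information from both. The sketch below is only a conceptual baseline (a per-pixel weighted average in numpy), not the MFF network described in the article; the image arrays and the `alpha` weight are hypothetical.

```python
# Conceptual baseline for multi-modal fusion: a per-pixel weighted average
# of two co-registered grayscale images. This is NOT the MFF network from
# the article; it only illustrates what "fusing" two modalities means.
import numpy as np

def naive_fusion(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Fuse two aligned images of the same shape; alpha weights img_a."""
    if img_a.shape != img_b.shape:
        raise ValueError("source images must be co-registered and equally sized")
    fused = alpha * img_a.astype(np.float32) + (1.0 - alpha) * img_b.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Hypothetical example: fuse a visible-light frame with an infrared frame.
visible = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
infrared = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
fused = naive_fusion(visible, infrared, alpha=0.6)
print(fused.shape, fused.dtype)  # (256, 256) uint8
```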
| Main Authors: | Yuequn Wang, Zhengwei Li, Jianli Wang, Leqiang Yang, Bo Dong, Hanfu Zhang, Jie Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/10877823/ |
Similar Items
- LGFusion: Frequency-Aware Dual-Branch Integration Network for Infrared and Visible Image Fusion
  by: Ruizhe Shang, et al.
  Published: (2025-01-01)
- HDCTfusion: Hybrid Dual-Branch Network Based on CNN and Transformer for Infrared and Visible Image Fusion
  by: Wenqing Wang, et al.
  Published: (2024-12-01)
- DBSQFusion: a multimodal image fusion method based on dual-channel attention
  by: Shaodong Liu, et al.
  Published: (2025-08-01)
- Robust Infrared–Visible Fusion Imaging with Decoupled Semantic Segmentation Network
  by: Xuhui Zhang, et al.
  Published: (2025-04-01)
- Infrared and Visible Image Fusion via Residual Interactive Transformer and Cross-Attention Fusion
  by: Liquan Zhao, et al.
  Published: (2025-07-01)