The Fusion of Infrared and Visible Images via Feature Extraction and Subwindow Variance Filtering
This paper presents a subwindow variance filtering algorithm for fusing infrared and visible light images, with the goal of addressing blurred details, low contrast, and missing edge features. First, the images to be fused undergo multilevel decomposition with a subwindow variance filter, yielding a base layer and multiple detail layers for each image. PCANet extracts features from the base layers and produces the weight maps that guide base-layer fusion. For detail-layer fusion, a saliency measurement method extracts saliency maps from the source images; these maps are compared to obtain an initial weight map, which is then refined with guided filtering and used to fuse the detail layers. Finally, the fused base layer and detail layers are superimposed to obtain the fusion result. The algorithm is evaluated with subjective and objective measures, including information entropy, mutual information, multiscale structural similarity, standard deviation, and visual information fidelity. The results show that it preserves rich detail, high contrast, and edge information, making it a promising approach for infrared and visible image fusion.
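To make the pipeline described in the abstract concrete, below is a minimal NumPy/SciPy sketch of its four stages: multilevel decomposition, base-layer fusion, detail-layer fusion, and reconstruction. This is not the authors' implementation: the subwindow variance filter, the PCANet-based base-layer weighting, and the guided-filter refinement of the detail weights are replaced by simple stand-ins (box-filter local variance, variance-based weights, and Gaussian smoothing), and the function names (`local_variance`, `decompose`, `fuse`) are illustrative.

```python
# Minimal sketch of the fusion pipeline from the abstract, assuming two
# registered grayscale float images of equal size. The paper's subwindow
# variance filter, PCANet weighting, and guided-filter refinement are NOT
# reproduced; simple stand-ins keep the structure runnable.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter


def local_variance(img, win=7):
    """Per-pixel variance in a win x win subwindow (box-filter moments)."""
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return np.clip(mean_sq - mean * mean, 0.0, None)


def decompose(img, levels=3, win=7):
    """Multilevel decomposition into one base layer and several detail layers.
    Stand-in for subwindow-variance filtering: each level keeps a smoothed
    'base' and the residual 'detail', so base + sum(details) == img."""
    details, base = [], img.astype(np.float64)
    for _ in range(levels):
        smoothed = uniform_filter(base, size=win)
        details.append(base - smoothed)
        base = smoothed
    return base, details


def fuse(ir, vis, levels=3, win=7):
    base_ir, det_ir = decompose(ir, levels, win)
    base_vis, det_vis = decompose(vis, levels, win)

    # Base-layer fusion: weight map from local variance of the base layers
    # (the paper derives these weights from PCANet features instead).
    v_ir, v_vis = local_variance(base_ir, win), local_variance(base_vis, win)
    w_base = v_ir / (v_ir + v_vis + 1e-12)
    fused_base = w_base * base_ir + (1.0 - w_base) * base_vis

    # Detail-layer fusion: a simple saliency measure (absolute detail
    # response) is compared to get an initial weight map, then smoothed as a
    # crude substitute for guided-filter optimization.
    fused_details = []
    for d_ir, d_vis in zip(det_ir, det_vis):
        w = (np.abs(d_ir) >= np.abs(d_vis)).astype(np.float64)
        w = gaussian_filter(w, sigma=2.0)  # refine the binary weight map
        fused_details.append(w * d_ir + (1.0 - w) * d_vis)

    # Reconstruction: superimpose the fused base and detail layers.
    return fused_base + sum(fused_details)
```

Here `ir` and `vis` are expected as 2-D float arrays in [0, 1]; swapping in the paper's actual subwindow variance filter, PCANet weights, and guided filter would only change the three marked steps.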
Saved in:
Main Authors: | Xin Feng, Haifeng Gong |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2024-01-01 |
Series: | Journal of Electrical and Computer Engineering |
Online Access: | http://dx.doi.org/10.1155/2024/2641647 |
author | Xin Feng, Haifeng Gong |
---|---|
collection | DOAJ |
format | Article |
id | doaj-art-2dff75508eeb4fa5aa61ba11e7323aff |
institution | Kabale University |
issn | 2090-0155 |
language | English |
publishDate | 2024-01-01 |
publisher | Wiley |
record_format | Article |
series | Journal of Electrical and Computer Engineering |
affiliation | Engineering Research Centre for Waste Oil Recovery Technology and Equipment of Ministry of Education (Xin Feng, Haifeng Gong) |
title | The Fusion of Infrared and Visible Images via Feature Extraction and Subwindow Variance Filtering |
url | http://dx.doi.org/10.1155/2024/2641647 |