Benchmarking of Multispectral Pansharpening: Reproducibility, Assessment, and Meta-Analysis
The term pansharpening denotes the process by which the geometric resolution of a multiband image is increased by means of a co-registered broadband panchromatic observation of the same scene having greater spatial resolution. Over time, the benchmarking of pansharpening methods has revealed itself to be more challenging than the development of new methods.
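To make the fusion idea concrete, below is a minimal illustrative Python sketch of luminance-proportional detail injection with a haze (path-radiance) offset, in the spirit of the haze-corrected AWLP-H discussed in this record. It is not the authors' reference implementation: the Gaussian low-pass stands in for the à-trous wavelet decomposition, plain interpolation stands in for MTF-matched upsampling, the per-band haze values are assumed to be supplied externally, and the function name `awlp_h_sketch` is hypothetical.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def awlp_h_sketch(ms, pan, ratio=4, haze=None):
    """Illustrative haze-corrected, luminance-proportional pansharpening.

    ms    : (bands, H/ratio, W/ratio) low-resolution multispectral cube
    pan   : (H, W) panchromatic band at full resolution
    ratio : spatial resolution ratio between PAN and MS
    haze  : per-band path-radiance estimates; zeros if not provided
    """
    bands = ms.shape[0]
    if haze is None:
        haze = np.zeros(bands)

    # Upsample MS to the PAN grid (plain interpolation as a stand-in).
    ms_up = np.stack([zoom(ms[b], ratio, order=3) for b in range(bands)])

    # Luminance (intensity) as the mean of the haze-corrected bands.
    intensity = np.mean(ms_up - haze[:, None, None], axis=0)

    # Spatial detail of PAN: difference from a low-pass approximation,
    # standing in for the à-trous wavelet detail planes of AWLP.
    pan_low = gaussian_filter(pan, sigma=ratio / 2)
    detail = pan - pan_low

    # Inject the detail proportionally to each haze-corrected band,
    # so that interband (spectral) ratios are approximately preserved.
    eps = 1e-6
    gains = (ms_up - haze[:, None, None]) / (intensity + eps)
    return ms_up + gains * detail
```

Subtracting the haze offsets before forming the gains is what makes the injection proportional to at-surface radiance rather than at-sensor radiance, which is the rationale given in the abstract for the path-radiance correction.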
Main Authors: | Luciano Alparone, Andrea Garzelli |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2024-12-01 |
Series: | Journal of Imaging |
Subjects: | benchmarking; haze correction; meta-analysis; pansharpening; remote sensing; reproducibility |
Online Access: | https://www.mdpi.com/2313-433X/11/1/1 |
_version_ | 1832588264031125504 |
---|---|
author | Luciano Alparone; Andrea Garzelli |
author_facet | Luciano Alparone; Andrea Garzelli |
author_sort | Luciano Alparone |
collection | DOAJ |
description | The term pansharpening denotes the process by which the geometric resolution of a multiband image is increased by means of a co-registered broadband panchromatic observation of the same scene having greater spatial resolution. Over time, the benchmarking of pansharpening methods has revealed itself to be more challenging than the development of new methods. Their recent proliferation in the literature is mostly due to the lack of a standardized assessment. In this paper, we draw guidelines for correct and fair comparative evaluation of pansharpening methods, focusing on the reproducibility of results and resorting to concepts of meta-analysis. As a major outcome of this study, an improved version of the additive wavelet luminance proportional (AWLP) pansharpening algorithm offers all of the favorable characteristics of an ideal benchmark, namely, performance, speed, absence of adjustable running parameters, reproducibility of results with varying datasets and landscapes, and automatic correction of the path radiance term introduced by the atmosphere. The proposed benchmarking protocol employs the haze-corrected AWLP-H and exploits meta-analysis for cross-comparisons among different experiments. After assessment on five different datasets, it was found to provide reliable and consistent results in ranking different fusion methods. |
format | Article |
id | doaj-art-356b32a2656c4fefbafdf13e5dfb7106 |
institution | Kabale University |
issn | 2313-433X |
language | English |
publishDate | 2024-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Journal of Imaging |
spelling | Journal of Imaging, vol. 11, no. 1, article 1, 2024-12-01; DOI: 10.3390/jimaging11010001; MDPI AG. Luciano Alparone (Department of Information Engineering, University of Florence, 50139 Florence, Italy); Andrea Garzelli (Department of Information Engineering and Mathematics, University of Siena, 53100 Siena, Italy). |
title | Benchmarking of Multispectral Pansharpening: Reproducibility, Assessment, and Meta-Analysis |
topic | benchmarking; haze correction; meta-analysis; pansharpening; remote sensing; reproducibility |
url | https://www.mdpi.com/2313-433X/11/1/1 |