Superpixel guided spectral-spatial feature extraction and weighted feature fusion for hyperspectral image classification with limited training samples

Bibliographic Details
Main Authors: Yao Li, Liyi Zhang, Lei Chen, Yunpeng Ma
Format: Article
Language: English
Published: Nature Portfolio 2025-01-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-87030-7
Description
Summary: Deep learning is a double-edged sword: the powerful feature-learning ability of deep models can effectively improve classification accuracy, but when the training samples for each class are limited, these models not only face the problem of overfitting but also suffer a significant drop in classification performance. To address this critical problem, we propose a novel superpixel-guided model for spectral-spatial feature extraction and weighted feature fusion. It aims to thoroughly “squeeze” and exploit the untapped spectral and spatial features contained in hyperspectral images from multiple angles and at multiple stages. First, guided by superpixels, we represent the hyperspectral image as latent features and apply a multi-band priority criterion to select the final discriminative features. Second, we design a classification framework that performs weighted feature fusion of a pixel-based CNN and a two-scale superpixel-based GCN. Comparisons with several strong band-selection methods verify the performance of our feature extraction module. In addition, with only five training samples per class, comparative experiments against several state-of-the-art classification methods confirm the excellent performance of our method on three widely used data sets.
ISSN: 2045-2322
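To make the weighted-fusion step described in the summary more concrete, the following is a minimal sketch of how per-pixel class scores from a pixel-based CNN branch and a two-scale superpixel-based GCN branch could be combined with scalar weights. The array shapes, the weight values, and the function names are illustrative assumptions, not the authors' implementation; in the paper the fusion weights would be learned or tuned rather than fixed as they are here.

```python
# Minimal sketch (not the authors' code): weighted fusion of per-pixel class
# scores from a pixel-based CNN branch and two superpixel-scale GCN branches.
# All shapes, weights, and inputs below are hypothetical placeholders.
import numpy as np


def fuse_predictions(cnn_scores, gcn_scores_fine, gcn_scores_coarse,
                     weights=(0.5, 0.25, 0.25)):
    """Fuse three (H, W, C) class-score maps with scalar weights.

    The weights are assumed fusion coefficients; they stand in for whatever
    learned or tuned coefficients the paper's framework would use.
    """
    w_cnn, w_fine, w_coarse = weights
    fused = (w_cnn * cnn_scores
             + w_fine * gcn_scores_fine
             + w_coarse * gcn_scores_coarse)
    return fused.argmax(axis=-1)  # per-pixel class labels


if __name__ == "__main__":
    H, W, C = 64, 64, 9  # hypothetical image size and number of classes
    rng = np.random.default_rng(0)
    cnn = rng.random((H, W, C))          # stand-in for pixel-based CNN scores
    gcn_fine = rng.random((H, W, C))     # stand-in for fine-scale superpixel GCN scores
    gcn_coarse = rng.random((H, W, C))   # stand-in for coarse-scale superpixel GCN scores
    labels = fuse_predictions(cnn, gcn_fine, gcn_coarse)
    print(labels.shape)  # (64, 64) map of predicted class indices
```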