Dual-Alignment CLIP: Task-Specific Alignment of Text and Visual Features for Few-Shot Remote Sensing Scene Classification

Bibliographic Details
Main Authors: Dongmei Deng, Ping Yao
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Online Access: https://ieeexplore.ieee.org/document/11083761/
Description
Summary: Convolutional neural networks (CNNs) are widely adopted for remote sensing image scene classification, but building large annotated remote sensing datasets is costly and time-consuming, which limits the applicability of CNNs in real-world settings. Inspired by the human ability to learn from a handful of examples, few-shot image classification offers a promising alternative by exploiting limited labeled data. Recently, contrastive language–image pretraining (CLIP) has shown impressive few-shot classification performance on downstream remote sensing tasks. However, existing CLIP-based methods still face two essential issues: 1) bias in text features and 2) unreliable similarity measurement between image features. To address these issues, we design a multilevel image–text feature alignment (MITA) component that aligns the multimodal embeddings with visual-guided text features at the instance, class, and random levels, and an image–image feature alignment (IIA) component that reliably measures the similarity between images by remapping visual features from the image–text alignment embedding space to an image–image alignment feature space. In addition, we build an adaptive knowledge fusion component that automatically fuses prior knowledge from the pretrained model with task-specific new knowledge from the MITA and IIA components. Together these components form the proposed dual-alignment CLIP (DA-CLIP) method, and extensive experiments on 12 remote sensing datasets validate its effectiveness.
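The abstract's adaptive knowledge fusion step lends itself to a short illustration. The PyTorch sketch below shows one plausible reading of the idea, not the authors' implementation: prior logits from frozen CLIP image–text similarities are blended with task-specific logits computed in a remapped image–image feature space, weighted by a learnable scalar. All names here (AdaptiveFusionHead, remap, alpha) are assumptions introduced for exposition.

# Illustrative sketch of a DA-CLIP-style adaptive fusion head.
# All module and variable names are assumptions for exposition;
# the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveFusionHead(nn.Module):
    """Blend frozen-CLIP prior logits with task-specific alignment logits."""

    def __init__(self, dim: int):
        super().__init__()
        # Hypothetical remapping from the image-text embedding space
        # to an image-image alignment space (the IIA idea, sketched).
        self.remap = nn.Linear(dim, dim)
        # Learnable scalar controlling how much new knowledge is trusted.
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, img_feat, text_feat, class_proto):
        # img_feat:    (B, D) query image embeddings from the CLIP encoder
        # text_feat:   (C, D) class text embeddings (prior knowledge)
        # class_proto: (C, D) few-shot class prototypes in image space
        img_feat = F.normalize(img_feat, dim=-1)
        prior_logits = img_feat @ F.normalize(text_feat, dim=-1).t()

        # Image-image similarity measured in the remapped feature space.
        q = F.normalize(self.remap(img_feat), dim=-1)
        p = F.normalize(self.remap(class_proto), dim=-1)
        task_logits = q @ p.t()

        # Adaptive fusion of prior and task-specific knowledge.
        w = torch.sigmoid(self.alpha)
        return (1 - w) * prior_logits + w * task_logits

# Toy usage with random tensors (4 queries, 10 classes, embedding dim 512).
if __name__ == "__main__":
    head = AdaptiveFusionHead(dim=512)
    logits = head(torch.randn(4, 512), torch.randn(10, 512), torch.randn(10, 512))
    print(logits.shape)  # torch.Size([4, 10])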
ISSN: 1939-1404, 2151-1535