Multi-Class Guided GAN for Remote-Sensing Image Synthesis Based on Semantic Labels
When labeled remote-sensing data are scarce, model performance is constrained by the insufficient availability of training samples. Generative-model-based data augmentation has emerged as a promising solution to this limitation. While existing generative models perform well on natural scenes (e.g., faces and street scenes), their performance in remote sensing is hindered by severe data imbalance and the semantic similarity among land-cover classes. To tackle these challenges, we propose the Multi-Class Guided GAN (MCGGAN), a novel network for generating remote-sensing images from semantic labels. The model features a dual-branch architecture: a global generator captures the overall image structure, while a multi-class generator improves the quality and differentiation of individual land-cover types. To integrate the two generators, we design a shared-parameter encoder that keeps feature encoding consistent across the branches, and a spatial decoder that composites the outputs of the class generators, preventing overlap and confusion. Additionally, we employ a perceptual loss ($L_{VGG}$) to assess perceptual similarity between generated and real images, and a texture matching loss ($L_{T}$) to capture fine texture details. To evaluate generation quality, we tested multiple models on two custom datasets (one from Chongzhou, Sichuan Province, and another from Wuzhen, Zhejiang Province, China) and on the public LoveDA dataset. The results show that MCGGAN improves on the Pix2Pix baseline by 52.86 in FID, 0.0821 in SSIM, and 0.0297 in LPIPS. We also compared the semantic segmentation accuracy of a U-Net before and after incorporating the generated images: augmenting the training data with generated images improves FWIoU by 4.47% and OA by 3.23% across the Chongzhou and Wuzhen datasets. These experiments show that MCGGAN can be used effectively as a data augmentation approach to improve the performance of downstream remote-sensing image segmentation tasks.
Main Authors: | Zhenye Niu, Yuxia Li, Yushu Gong, Bowei Zhang, Yuan He, Jinglin Zhang, Mengyu Tian, Lei He |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Remote Sensing |
Subjects: | remote-sensing images; generative adversarial networks; image synthesis; data augmentation |
Online Access: | https://www.mdpi.com/2072-4292/17/2/344 |
collection | DOAJ |
id | doaj-art-87ba04b9b034406b8379cabf111d5495 |
institution | Kabale University |
issn | 2072-4292 |
doi | 10.3390/rs17020344 (Remote Sensing, 2025, Vol. 17, Issue 2, Article 344) |
affiliations | Zhenye Niu, Yuxia Li, Yushu Gong, Jinglin Zhang, Mengyu Tian: School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China. Bowei Zhang, Yuan He: Southwest Institute of Technical Physics, Chengdu 610041, China. Lei He: School of Software Engineering, Chengdu University of Information Technology, Chengdu 610225, China |
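The abstract describes a dual-branch design: a shared-parameter encoder feeds both a global generator and per-class generators, and a spatial decoder composites the class outputs under the semantic-label map so that regions do not overlap. The record gives none of the actual layers, so the following is only a schematic sketch of that composition idea, assuming PyTorch; every channel width, layer choice, and the mask-weighted blending rule are illustrative assumptions, not the paper's architecture.

```python
# Schematic sketch of the dual-branch composition described in the abstract.
# Assumes PyTorch; all layers and the fusion rule are illustrative assumptions.
import torch
import torch.nn as nn

class MCGGANSketch(nn.Module):
    def __init__(self, num_classes, feat=64):
        super().__init__()
        # Input is a one-hot semantic label map, as in label-to-image GANs.
        self.encoder = nn.Sequential(                 # shared-parameter encoder
            nn.Conv2d(num_classes, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.global_gen = nn.Conv2d(feat, 3, 3, padding=1)   # overall structure
        self.class_gens = nn.ModuleList(                      # one branch per class
            [nn.Conv2d(feat, 3, 3, padding=1) for _ in range(num_classes)]
        )

    def forward(self, label_onehot):
        f = self.encoder(label_onehot)            # same features feed both branches
        global_img = torch.tanh(self.global_gen(f))
        # "Spatial decoder": each class generator only paints its own region,
        # so class outputs cannot overlap or bleed into each other.
        class_img = torch.zeros_like(global_img)
        for c, gen in enumerate(self.class_gens):
            mask = label_onehot[:, c:c + 1]       # (B,1,H,W) region of class c
            class_img = class_img + mask * torch.tanh(gen(f))
        return 0.5 * (global_img + class_img)     # naive fusion of the two branches
```

A label map `labels` of shape (B, H, W) would be fed as `torch.nn.functional.one_hot(labels, num_classes).permute(0, 3, 1, 2).float()`.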
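The perceptual loss $L_{VGG}$ and texture matching loss $L_{T}$ named in the abstract are standard constructions in image synthesis. Since the record gives no formulas, here is a minimal sketch, assuming PyTorch/torchvision: $L_{VGG}$ compares VGG-19 feature maps of generated and real images, and $L_{T}$ compares Gram matrices of those same features. The layer selection and the L1 distance are assumptions; the paper may weight or select layers differently.

```python
# Minimal sketch of the two auxiliary losses named in the abstract,
# assuming PyTorch/torchvision; layer choices and distances are assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg19, VGG19_Weights

class VGGFeatures(nn.Module):
    """Extract feature maps after selected VGG-19 ReLU layers."""
    def __init__(self, layer_ids=(3, 8, 17, 26)):  # relu1_2, relu2_2, relu3_4, relu4_4
        super().__init__()
        self.vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)
        self.layer_ids = set(layer_ids)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

def gram_matrix(f):
    """Channel-channel correlations of a feature map: (B,C,H,W) -> (B,C,C)."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_and_texture_loss(extractor, fake, real):
    """L_VGG: L1 between feature maps; L_T: L1 between their Gram matrices."""
    l_vgg, l_t = 0.0, 0.0
    for ff, fr in zip(extractor(fake), extractor(real)):
        l_vgg = l_vgg + nn.functional.l1_loss(ff, fr)
        l_t = l_t + nn.functional.l1_loss(gram_matrix(ff), gram_matrix(fr))
    return l_vgg, l_t
```

Comparing Gram matrices rather than raw features makes $L_{T}$ sensitive to texture statistics while ignoring where the texture sits in the image, which suits fine land-cover detail.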
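The downstream evaluation reports FWIoU and OA for a U-Net trained with and without the generated images. Both metrics have fixed definitions over a confusion matrix, so the sketch below should match the reported quantities; the helper names are mine, and NumPy with integer class maps is assumed.

```python
# Sketch of the two segmentation metrics reported in the abstract (FWIoU, OA),
# computed from a confusion matrix; assumes NumPy and integer-valued label maps.
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix (rows: ground truth)."""
    mask = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def overall_accuracy(cm):
    """OA: fraction of pixels classified correctly."""
    return np.diag(cm).sum() / cm.sum()

def fwiou(cm):
    """FWIoU: per-class IoU weighted by each class's ground-truth frequency."""
    freq = cm.sum(axis=1) / cm.sum()
    iou = np.diag(cm) / (cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm))
    present = freq > 0                      # skip classes absent from ground truth
    return (freq[present] * iou[present]).sum()
```

The augmentation experiment itself then amounts to concatenating generated image/label pairs with the real training set, retraining the U-Net, and comparing these two scores on the untouched test split.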