A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images
Driven by deep learning, three-dimensional (3-D) target reconstruction from two-dimensional (2-D) synthetic aperture radar (SAR) images has advanced considerably; however, there is still room for improvement in reconstruction quality. In this paper, we propose a structurally flexible occupancy network (SFONet) to achieve high-quality reconstruction of a 3-D target from one or more 2-D SAR images.
Main Authors: | Lingjuan Yu, Jianlong Liu, Miaomiao Liang, Xiangchun Yu, Xiaochun Xie, Hui Bi, Wen Hong |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Remote Sensing |
Subjects: | three-dimensional target reconstruction; 2-D SAR image; complex-valued attention mechanism; complex-valued long short-term memory; structurally flexible occupancy network |
Online Access: | https://www.mdpi.com/2072-4292/17/2/347 |
author | Lingjuan Yu; Jianlong Liu; Miaomiao Liang; Xiangchun Yu; Xiaochun Xie; Hui Bi; Wen Hong |
author_sort | Lingjuan Yu |
collection | DOAJ |
description | Driven by deep learning, three-dimensional (3-D) target reconstruction from two-dimensional (2-D) synthetic aperture radar (SAR) images has advanced considerably; however, there is still room for improvement in reconstruction quality. In this paper, we propose a structurally flexible occupancy network (SFONet) to achieve high-quality reconstruction of a 3-D target from one or more 2-D SAR images. The SFONet consists of a basic network and a pluggable module that allows it to switch between two input modes: a single azimuthal image or multiple azimuthal images. The pluggable module comprises a complex-valued (CV) long short-term memory (LSTM) submodule, which extracts structural features of the target from multiple azimuthal SAR images, and a CV attention submodule, which fuses these features. When both input modes are needed, we also propose a two-stage training strategy: in the first stage, the basic network is trained using a single azimuthal SAR image as input; in the second stage, the trained basic network is frozen and only the pluggable module is trained, using multiple azimuthal SAR images as input. Finally, we construct an experimental dataset containing 2-D SAR images and 3-D ground truth from the publicly available Gotcha echo dataset. Experimental results show that, once trained, the SFONet can reconstruct a 3-D target from one or more azimuthal images with higher quality than other deep learning-based 3-D reconstruction methods. Moreover, when a training sample is composed appropriately, the number of samples required to train the SFONet can be reduced. |
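The description mentions a CV attention submodule that fuses complex-valued features extracted from multiple azimuthal SAR images. As an illustration only — the paper's actual submodule design is not given in this record — here is a minimal NumPy sketch of one plausible scheme: magnitude-based attention over complex feature vectors from N views. The function name, shapes, and scoring rule are all assumptions, not the authors' method.

```python
import numpy as np

def cv_attention_fuse(features: np.ndarray) -> np.ndarray:
    """Fuse complex-valued features from N azimuthal views into one vector.

    features: complex array of shape (N, D), one D-dim feature per view.
    Sketch only: each view gets a scalar weight from a softmax over its
    mean feature magnitude; the fused feature is the weighted sum of the
    complex per-view features, so phase information is preserved.
    """
    scores = np.abs(features).mean(axis=1)            # (N,) real-valued saliency per view
    scores = scores - scores.max()                    # shift for numerically stable softmax
    weights = np.exp(scores) / np.exp(scores).sum()   # (N,) attention weights, sum to 1
    return (weights[:, None] * features).sum(axis=0)  # (D,) fused complex feature

# Example: fuse features from 3 hypothetical azimuthal views
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 8)) + 1j * rng.standard_normal((3, 8))
fused = cv_attention_fuse(feats)  # complex vector of shape (8,)
```

Because the weights sum to 1, fusing identical per-view features returns that feature unchanged, which is a convenient sanity check for any convex-combination fusion rule.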
format | Article |
id | doaj-art-cf404567760041b5bf6b7755b3305cdf |
institution | Kabale University |
issn | 2072-4292 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | MDPI AG, Remote Sensing, ISSN 2072-4292, 2025-01-01, vol. 17, no. 2, art. 347, doi:10.3390/rs17020347 |
affiliations | Lingjuan Yu, Jianlong Liu, Miaomiao Liang, Xiangchun Yu: Jiangxi Province Key Laboratory of Multidimensional Intelligent Perception and Control, School of Information Engineering, Jiangxi University of Science and Technology, Ganzhou 341000, China. Xiaochun Xie: School of Physics and Electronic Information, Gannan Normal University, Ganzhou 341000, China. Hui Bi: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China. Wen Hong: Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100194, China |
title | A Structurally Flexible Occupancy Network for 3-D Target Reconstruction Using 2-D SAR Images |
topic | three-dimensional target reconstruction 2-D SAR image complex-valued attention mechanism complex-valued long short-term memory structurally flexible occupancy network |
url | https://www.mdpi.com/2072-4292/17/2/347 |