Hybrid generative adversarial network based on frequency and spatial domain for histopathological image synthesis

Abstract
Background: Due to the complexity and cost of preparing histopathological slides, deep learning-based methods have been developed to generate high-quality histological images. However, existing approaches focus primarily on spatial domain information, neglecting the periodic information in the frequency domain and the complementary relationship between the two domains. In this paper, we propose a generative adversarial network that employs a cross-attention mechanism to extract and fuse features across the spatial and frequency domains. The method optimizes frequency domain features using spatial domain guidance and refines spatial features with frequency domain information, preserving key details while eliminating redundancy to generate high-quality histological images.
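The abstract does not include implementation details, so the following is a minimal, illustrative sketch of the kind of cross-attention fusion it describes, assuming PyTorch and spatial- and frequency-domain feature maps of equal shape. All class, module, and parameter names are assumptions, not taken from the paper.

# Minimal sketch of cross-attention fusion between spatial- and frequency-domain
# feature maps (illustrative only; not the authors' implementation).
import torch
import torch.nn as nn


class CrossDomainFusion(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Queries come from one domain, keys/values from the other, so each
        # domain is refined under guidance from its counterpart.
        self.spat_from_freq = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.freq_from_spat = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_spatial: torch.Tensor, f_frequency: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f_spatial.shape
        # Flatten feature maps to token sequences of shape (B, H*W, C).
        s = f_spatial.flatten(2).transpose(1, 2)
        f = f_frequency.flatten(2).transpose(1, 2)
        # Refine spatial tokens with frequency information and vice versa.
        s_ref, _ = self.spat_from_freq(query=s, key=f, value=f)
        f_ref, _ = self.freq_from_spat(query=f, key=s, value=s)
        # Restore the (B, C, H, W) layout and fuse with a 1x1 convolution.
        s_ref = s_ref.transpose(1, 2).reshape(b, c, h, w)
        f_ref = f_ref.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([s_ref, f_ref], dim=1))


if __name__ == "__main__":
    fusion = CrossDomainFusion(channels=64)
    spatial = torch.randn(2, 64, 16, 16)
    frequency = torch.randn(2, 64, 16, 16)
    print(fusion(spatial, frequency).shape)  # torch.Size([2, 64, 16, 16])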

Results: Our model incorporates a variable-window mixed attention module that dynamically adjusts attention window sizes to capture both local details and global context. A spectral filtering module enhances the extraction of repetitive textures and periodic structures, while a cross-attention fusion module dynamically weights features from both domains, focusing on the most critical information to produce realistic and detailed images.

Conclusions: The proposed method achieves efficient spatial-frequency domain fusion, significantly improving image generation quality. Experiments on the Patch Camelyon dataset show superior performance over eight state-of-the-art models across five metrics. This approach advances automated histopathological image generation with potential for clinical applications.
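The spectral filtering module mentioned in the Results is likewise not specified at this level of detail. Below is a minimal sketch, assuming PyTorch, a fixed feature-map size, and a learnable element-wise filter applied in the Fourier domain; all names and shapes are illustrative assumptions.

# Minimal sketch of a spectral filtering block (illustrative only).
import torch
import torch.nn as nn


class SpectralFilter(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex weight per channel and frequency bin,
        # stored as a real tensor with a trailing real/imaginary axis.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Real 2-D FFT over the spatial axes: (B, C, H, W//2 + 1), complex dtype.
        spec = torch.fft.rfft2(x, dim=(-2, -1), norm="ortho")
        # Element-wise filtering in the frequency domain emphasises
        # repetitive textures and periodic structures.
        spec = spec * torch.view_as_complex(self.weight)
        # Back to the spatial domain at the original size.
        return torch.fft.irfft2(spec, s=(h, w), dim=(-2, -1), norm="ortho")


if __name__ == "__main__":
    block = SpectralFilter(channels=64, height=16, width=16)
    feats = torch.randn(2, 64, 16, 16)
    print(block(feats).shape)  # torch.Size([2, 64, 16, 16])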

Bibliographic Details
Main Authors: Qifeng Liu, Tao Zhou, Chi Cheng, Jin Ma, Marzia Hoque Tania
Format: Article
Language: English
Published: BMC, 2025-01-01
Series: BMC Bioinformatics
Subjects: Generative adversarial networks; Cross-attention mechanism; Spatial domain; Frequency domain; Histological slide images; Variable-window mixed attention
Online Access: https://doi.org/10.1186/s12859-025-06057-9
ISSN: 1471-2105
Author Affiliations: Qifeng Liu, Marzia Hoque Tania (Centre for Big Data Research in Health, University of New South Wales); Tao Zhou (Department of Respiratory and Critical Medicine, The Second Affiliated Hospital of Nanchang University); Chi Cheng (School of Computer Science and Engineering, University of New South Wales); Jin Ma (Faculty of Engineering, The University of Sydney)