Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation
Disentangled generation enables the synthesis of images with explicit control over disentangled attributes. However, traditional generative models often struggle to independently disentangle these attributes while maintaining the ability to generate entirely new, fully randomized, and diverse synthetic data.
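To make "explicit control over disentangled attributes" concrete, the following is a minimal PyTorch sketch, our illustration under stated assumptions rather than the paper's released implementation: the `PairedCXRGenerator` class, latent dimensions, and architecture are all hypothetical. The point it shows is the interface: the generator takes two independent latent codes, so fixing the patient code while toggling the disease code yields a matched image pair.

```python
# Minimal sketch of a two-code controllable generator (illustrative only;
# names, dimensions, and architecture are assumptions, not the paper's model).
import torch
import torch.nn as nn

class PairedCXRGenerator(nn.Module):
    def __init__(self, patient_dim=128, disease_dim=32, img_size=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(patient_dim + disease_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, img_size * img_size),
            nn.Tanh(),  # grayscale CXR intensities in [-1, 1]
        )
        self.img_size = img_size

    def forward(self, z_patient, z_disease):
        # Concatenate the two independent attribute codes into one input.
        z = torch.cat([z_patient, z_disease], dim=-1)
        return self.net(z).view(-1, 1, self.img_size, self.img_size)

g = PairedCXRGenerator()
z_p = torch.randn(1, 128)                # one (synthetic) patient identity
x_healthy = g(z_p, torch.zeros(1, 32))   # disease attribute "absent"
x_diseased = g(z_p, torch.randn(1, 32))  # disease attribute "present"
# (x_healthy, x_diseased) is a matching pair differing only in the disease code.
```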
Main Authors: | Euyoung Kim, Soochahn Lee, Kyoung Mu Lee |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Contrastive learning; controllable generation; generative adversarial network; latent space disentanglement |
Online Access: | https://ieeexplore.ieee.org/document/10844299/ |
_version_ | 1832583998581243904 |
---|---|
author | Euyoung Kim; Soochahn Lee; Kyoung Mu Lee |
author_facet | Euyoung Kim; Soochahn Lee; Kyoung Mu Lee |
author_sort | Euyoung Kim |
collection | DOAJ |
description | Disentangled generation enables the synthesis of images with explicit control over disentangled attributes. However, traditional generative models often struggle to independently disentangle these attributes while maintaining the ability to generate entirely new, fully randomized, and diverse synthetic data. In this study, we propose a novel framework for disentangled Chest X-ray (CXR) generation that enables explicit control over person-specific and disease-specific attributes. This framework synthesizes CXR images that preserve the same patient identity—either real or randomly generated—while selectively varying the presence or absence of specific diseases. These synthesized matching-paired CXRs not only augment training datasets but also aid in identifying lesions more effectively by comparing attribute-specific differences between paired images. The proposed method leverages contrastive learning to disentangle latent spaces for patient and disease attributes, modeling these spaces with multivariate Gaussians for precise and exclusive attribute sampling. This disentangled representation enables the training of a controllable generative model capable of manipulating disease attributes in CXR images. Experimental results demonstrate the fidelity and diversity of the generated images through qualitative assessments and quantitative comparisons, outperforming state-of-the-art class-conditional generative adversarial networks on two public CXR datasets. Further experiments on clinical efficacy demonstrate that our method improves disease classification and detection tasks by leveraging data augmentation and employing the difference maps generated from paired images as effective attention maps for lesion localization. These findings underscore the potential of our framework to improve medical imaging analysis and facilitate novel clinical applications. |
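The description names two core ingredients: a contrastive loss that disentangles patient-specific from disease-specific latent codes, and multivariate Gaussians fitted to those latent spaces for "precise and exclusive attribute sampling". The sketch below is a hedged, generic rendition of both, not the authors' code; `info_nce`, `fit_latent_gaussian`, and the temperature value are our assumptions.

```python
# Hedged sketch of the two ingredients named in the abstract (illustrative,
# not the authors' implementation): an InfoNCE-style contrastive loss for
# disentangling patient vs. disease codes, and a multivariate Gaussian model
# of a latent space for sampling new attributes.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive loss: the anchor should match its positive, not the negatives."""
    anchor = F.normalize(anchor, dim=-1)        # (B, D)
    positive = F.normalize(positive, dim=-1)    # (B, D)
    negatives = F.normalize(negatives, dim=-1)  # (N, D)
    pos = (anchor * positive).sum(-1, keepdim=True) / temperature  # (B, 1)
    neg = anchor @ negatives.t() / temperature                     # (B, N)
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(anchor.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, target)

# For a matched pair (same patient, disease toggled), the two patient codes
# form a positive pair, while codes from other patients act as negatives;
# an analogous loss on the disease codes disentangles the second space.

def fit_latent_gaussian(codes):
    """Fit a multivariate Gaussian to latent codes of shape (N, D), N >= 2."""
    mu = codes.mean(dim=0)
    cov = torch.cov(codes.t()) + 1e-4 * torch.eye(codes.size(1))  # regularized
    return torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)

# z_disease_new = fit_latent_gaussian(disease_codes).sample()
# gives the generator a novel, in-distribution disease attribute.
```

Given codes sampled this way, the pixel-wise difference between a matched pair (same patient code, disease code toggled) yields the difference maps that the abstract describes using as attention maps for lesion localization.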
format | Article |
id | doaj-art-b9488561ffe14aa7901f90041df5897e |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | Record ID: doaj-art-b9488561ffe14aa7901f90041df5897e (indexed 2025-01-28T00:01:38Z). Language: eng. Publisher: IEEE. Series: IEEE Access, ISSN 2169-3536, 2025-01-01, vol. 13, pp. 15453-15468. DOI: 10.1109/ACCESS.2025.3531366. IEEE article no. 10844299. Title: Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation. Authors: Euyoung Kim (https://orcid.org/0000-0003-0528-6557), Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Republic of Korea; Soochahn Lee (https://orcid.org/0000-0002-2975-2519), School of Electrical Engineering, Kookmin University, Seoul, Republic of Korea; Kyoung Mu Lee (https://orcid.org/0000-0001-7210-1036), Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Republic of Korea. Abstract: as given in the description field above. URL: https://ieeexplore.ieee.org/document/10844299/. Keywords: Contrastive learning; controllable generation; generative adversarial network; latent space disentanglement. |
spellingShingle | Euyoung Kim; Soochahn Lee; Kyoung Mu Lee; Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation; IEEE Access; Contrastive learning; controllable generation; generative adversarial network; latent space disentanglement |
title | Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation |
title_full | Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation |
title_fullStr | Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation |
title_full_unstemmed | Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation |
title_short | Disentangled Contrastive Learning From Synthetic Matching Pairs for Targeted Chest X-Ray Generation |
title_sort | disentangled contrastive learning from synthetic matching pairs for targeted chest x ray generation |
topic | Contrastive learning; controllable generation; generative adversarial network; latent space disentanglement |
url | https://ieeexplore.ieee.org/document/10844299/ |
work_keys_str_mv | AT euyoungkim disentangledcontrastivelearningfromsyntheticmatchingpairsfortargetedchestxraygeneration AT soochahnlee disentangledcontrastivelearningfromsyntheticmatchingpairsfortargetedchestxraygeneration AT kyoungmulee disentangledcontrastivelearningfromsyntheticmatchingpairsfortargetedchestxraygeneration |