An Efficient Retinal Fluid Segmentation Network Based on Large Receptive Field Context Capture for Optical Coherence Tomography Images

Bibliographic Details
Main Authors: Hang Qi, Weijiang Wang, Hua Dang, Yueyang Chen, Minli Jia, Xiaohua Wang
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Entropy
Online Access: https://www.mdpi.com/1099-4300/27/1/60
Description
Summary: Optical Coherence Tomography (OCT) is a crucial imaging modality for diagnosing and monitoring retinal diseases. However, the accurate segmentation of fluid regions and lesions remains challenging due to noise, low contrast, and blurred edges in OCT images. Although feature modeling with wide or global receptive fields offers a feasible solution, it typically leads to significant computational overhead. To address these challenges, we propose LKMU-Lite, a lightweight U-shaped segmentation method tailored for retinal fluid segmentation. LKMU-Lite integrates a Decoupled Large Kernel Attention (DLKA) module that captures both local patterns and long-range dependencies, thereby enhancing feature representation. Additionally, it incorporates a Multi-scale Group Perception (MSGP) module that employs dilated convolutions with varying receptive field scales to effectively predict lesions of different shapes and sizes. Furthermore, a novel Aggregating-Shift decoder is proposed, reducing model complexity while preserving feature integrity. With only 1.02 million parameters and a computational complexity of 3.82 GFLOPs, LKMU-Lite achieves state-of-the-art performance across multiple metrics on the ICF and RETOUCH datasets, demonstrating both its efficiency and generalizability compared to existing methods.
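The abstract's MSGP module relies on dilated convolutions with varying receptive-field scales; the record gives no implementation details, but the general mechanism can be illustrated. For a stack of stride-1 dilated convolutions, the effective receptive field grows as RF = 1 + Σ (k − 1)·d over the layers' dilation rates d, so branches with different rates see differently sized contexts at identical parameter cost. A minimal sketch (the dilation rates below are illustrative, not taken from the paper):

```python
def receptive_field(kernel_size: int, dilations) -> int:
    """Effective receptive field of stacked stride-1 dilated convolutions:
    RF = 1 + sum((kernel_size - 1) * d) over each layer's dilation d."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Parallel 3x3 branches with different (hypothetical) dilation rates:
for rates in ([1], [2], [3], [1, 2, 3]):
    print(f"dilations={rates}: receptive field {receptive_field(3, rates)}")
```

Note that the stacked branch with dilations 1, 2, 3 covers a 13-pixel extent using three 3×3 kernels, far cheaper than a dense 13×13 kernel, which is the usual efficiency argument for multi-scale dilated designs of this kind.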
ISSN: 1099-4300