An Attention-Based Residual U-Net for Tumour Segmentation Using Multi-Modal MRI Brain Images

Bibliographic Details
Main Authors: Najme Zehra Naqvi, K. R. Seeja
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10838527/
Description
Summary: Detecting brain tumours is challenging due to the complex brain anatomy and the wide range of tumour sizes, shapes, and locations. A crucial stage in diagnosing and treating brain tumours is automatically segmenting the tumour area from brain MRI. This involves the precise delineation of tumour boundaries within MRI scans, which helps to understand the tumour's extent, monitor its growth, plan treatment strategies, and assess treatment response over time. Hence, this research proposes a novel automated deep-learning approach based on U-Net for segmenting glioma tumours. In the proposed model, the basic U-Net is enhanced with several components to improve its performance. The encoder incorporates an Improved Multi-scale Context Attention (IMCA) block designed to extract and aggregate rich spatial contextual information from the input image. The decoder uses a Squeeze and Excitation module and residual blocks. The residual blocks help reduce network degradation and gradient vanishing, enabling the model to retain important information during decoding, while the Squeeze and Excitation module allows the model to exploit the high-level semantic features and spatial context collected from the encoder and the IMCA block. The performance of the proposed model is evaluated on two datasets, BraTS 2020 and BraTS 2018. The experiments on both datasets demonstrate that the proposed framework improves multi-modal MRI brain tumour segmentation performance on all evaluated metrics. On BraTS 2020 it achieved Dice coefficients of 0.9978, 0.9378, and 0.9478 for the whole tumour (WT), tumour core (TC), and enhancing tumour (ET) respectively, and on BraTS 2018 it achieved Dice coefficients of 98.32%, 93.32%, and 92.32% for WT, TC, and ET respectively.
ISSN:2169-3536
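
The decoder described in the summary combines residual convolution blocks with Squeeze-and-Excitation (SE) channel attention on features upsampled and merged with encoder skip connections. The following is a minimal PyTorch sketch of one such decoder stage, assuming a 2D slice-wise formulation; the class names, channel sizes, layer ordering, and the use of transposed convolution for upsampling are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pooling
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channel-wise recalibration


class ResidualSEDecoderBlock(nn.Module):
    """Decoder stage: upsample, concatenate the skip connection from the
    encoder path, then apply a residual conv block followed by SE attention."""

    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_channels, out_channels, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_channels + skip_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 conv so the identity path matches the residual path's channels
        self.shortcut = nn.Conv2d(out_channels + skip_channels, out_channels, 1)
        self.se = SEBlock(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.up(x), skip], dim=1)
        out = self.act(self.conv(x) + self.shortcut(x))  # residual connection
        return self.se(out)                              # SE recalibration


if __name__ == "__main__":
    block = ResidualSEDecoderBlock(in_channels=256, skip_channels=128, out_channels=128)
    x = torch.randn(1, 256, 30, 30)     # bottleneck-like feature map
    skip = torch.randn(1, 128, 60, 60)  # matching encoder skip feature map
    print(block(x, skip).shape)         # torch.Size([1, 128, 60, 60])

In this sketch the residual shortcut addresses the degradation and vanishing-gradient issues mentioned in the abstract, while the SE block reweights channels using the global context gathered from the concatenated encoder features.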