Multi-Scale Contextual Coding for Human-Machine Vision of Volumetric Medical Images

Bibliographic Details
Main Authors: Jietao Chen, Weijie Chen, Qianjian Xing, Feng Yu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11121294/
Description
Summary: In recent years, the continuous advancement of digital technologies such as telemedicine and medical cloud computing has promoted collaborative research and diagnosis across multiple medical centers. However, the timely remote transmission and analysis of large volumetric medical images still pose significant challenges. While classical methods, which predominantly employ lossless compression, are increasingly constrained by the limits of achievable compression ratios, lossy 3D medical image compression methods are emerging as a promising alternative. Unlike existing 3D convolutional compression algorithms oriented only toward human vision, this paper proposes a Multi-scale Contextual Autoencoder (MCAE) architecture that recurrently incorporates anatomical inter-slice context to optimize the compression of the current slice for both human and machine vision. Our decoded intermediate features, which preserve sufficient semantic information, enable high-quality visualization and allow downstream machine vision tasks (e.g., segmentation and classification) to be performed directly, without pixel-level recovery. To reduce the compression bit cost, we create a Multi-Dimensional Entropy Model that integrates inter-slice latent context with spatial-channel context and hierarchical hypercontext. Experimental results demonstrate that our framework obtains an average 9% BD-Rate reduction over the Versatile Video Coding (VVC) anchor on the MRNet datasets, while achieving better recognition performance on downstream segmentation and classification tasks than using the reconstructed lossy images as input.
ISSN: 2169-3536
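
To make the slice-recurrent idea in the summary concrete, the PyTorch snippet below is a minimal illustrative sketch, not the authors' MCAE implementation: it shows one way an autoencoder can carry an inter-slice context feature forward so that each slice is encoded and decoded conditioned on its predecessor. The module names, layer sizes, and the convolutional context update are all assumptions made for illustration only; the paper's multi-scale design and Multi-Dimensional Entropy Model are not modeled here.

```python
# Hypothetical sketch of a slice-recurrent autoencoder (assumed design,
# not the authors' code): each 2D slice of a volume is compressed with a
# context feature propagated from the previously coded slice.
import torch
import torch.nn as nn


class SliceRecurrentAutoencoder(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Per-slice analysis transform: 2D conv encoder over slice + context.
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + channels, channels, 5, stride=2, padding=2),
            nn.GELU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        # Per-slice synthesis transform mirroring the encoder.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1),
            nn.GELU(),
            nn.ConvTranspose2d(channels, 1, 5, stride=2,
                               padding=2, output_padding=1),
        )
        # Simple convolutional update of the inter-slice context feature.
        self.context_update = nn.Conv2d(channels + 1, channels, 3, padding=1)

    def forward(self, volume: torch.Tensor):
        # volume: (batch, depth, 1, H, W); slices are coded in order,
        # each conditioned on the context carried from the previous slice.
        b, d, _, h, w = volume.shape
        context = volume.new_zeros(b, self.context_update.out_channels, h, w)
        recon_slices, latents = [], []
        for i in range(d):
            x = volume[:, i]                                   # (b, 1, H, W)
            y = self.encoder(torch.cat([x, context], dim=1))   # slice latent
            x_hat = self.decoder(y)                            # reconstruction
            context = self.context_update(torch.cat([context, x_hat], dim=1))
            recon_slices.append(x_hat)
            latents.append(y)
        return torch.stack(recon_slices, dim=1), latents


if __name__ == "__main__":
    # Toy usage on a 4-slice volume with dimensions divisible by 4.
    model = SliceRecurrentAutoencoder()
    vol = torch.randn(1, 4, 1, 64, 64)
    recon, latents = model(vol)
    print(recon.shape, latents[0].shape)
```

In such a design, the per-slice latents would be what an entropy model codes, and they could also be handed directly to downstream segmentation or classification heads without reconstructing pixels, which is the human/machine-vision split the abstract describes.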