Neural Radiance Fields for High-Fidelity Soft Tissue Reconstruction in Endoscopy

Bibliographic Details
Main Authors: Jinhua Liu, Yongsheng Shi, Dongjin Huang, Jiantao Qu
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/2/565
Description
Summary: The advancement of neural radiance fields (NeRFs) has enabled high-quality 3D reconstruction of complex scenes. For most NeRFs, however, reconstructing 3D tissue from endoscopy images poses significant challenges: soft tissue regions are occluded by invalid pixels, the tissue deforms, and image quality is poor, all of which severely limit their application in endoscopic scenarios. To address these issues, we propose a novel framework for reconstructing high-fidelity soft tissue scenes from low-quality endoscopic images. We first construct EndoTissue, a dataset of soft tissue regions in endoscopic images, and fine-tune the Segment Anything Model (SAM) on EndoTissue to obtain a potent segmentation network. Given a sequence of monocular endoscopic images, this network quickly produces tissue masks. We then incorporate the tissue masks into the dynamic scene reconstruction method Tensor4D to effectively guide the reconstruction of 3D deformable soft tissue. Finally, we adopt the image enhancement model EDAU-Net to improve the quality of the rendered views. Experimental results show that our method effectively focuses on the soft tissue regions of an image, achieving higher detail fidelity and geometric structural integrity in reconstruction than state-of-the-art algorithms. Feedback from a user study shows that participants scored our method highly.
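
The summary describes a three-stage pipeline: per-frame tissue-mask prediction with a SAM model fine-tuned on EndoTissue, mask-guided dynamic reconstruction with Tensor4D, and enhancement of the rendered views with EDAU-Net. The sketch below is only a hypothetical illustration of how those stages might compose on a monocular frame sequence; every function name and stage body here is a placeholder (the real SAM, Tensor4D, and EDAU-Net calls are stubbed out), not the authors' released code.

```python
# Hypothetical sketch of the three-stage pipeline described in the summary.
# All names are placeholders for illustration; the stage bodies are stubs,
# not real SAM / Tensor4D / EDAU-Net invocations.
from typing import List
import numpy as np

def segment_tissue(frames: List[np.ndarray]) -> List[np.ndarray]:
    """Stage 1 (stub): a SAM model fine-tuned on the EndoTissue dataset
    would predict a binary soft-tissue mask for each monocular frame."""
    return [np.ones(f.shape[:2], dtype=bool) for f in frames]

def reconstruct_tissue(frames: List[np.ndarray],
                       masks: List[np.ndarray]) -> List[np.ndarray]:
    """Stage 2 (stub): Tensor4D-style dynamic reconstruction, with the
    masks confining supervision to valid tissue pixels; here we merely
    zero out non-tissue pixels to show the role the masks play."""
    return [f * m[..., None] for f, m in zip(frames, masks)]

def enhance_views(views: List[np.ndarray]) -> List[np.ndarray]:
    """Stage 3 (stub): an EDAU-Net-style pass would enhance the
    low-quality rendered views; stubbed here as the identity."""
    return views

def reconstruct_endoscopy_scene(frames: List[np.ndarray]) -> List[np.ndarray]:
    """Compose the three stages on a monocular endoscopic sequence."""
    masks = segment_tissue(frames)             # tissue masks via fine-tuned SAM
    views = reconstruct_tissue(frames, masks)  # mask-guided Tensor4D rendering
    return enhance_views(views)                # EDAU-Net post-enhancement

if __name__ == "__main__":
    # Toy run on random "frames" just to show the data flow end to end.
    seq = [np.random.rand(64, 64, 3).astype(np.float32) for _ in range(4)]
    out = reconstruct_endoscopy_scene(seq)
    print(len(out), out[0].shape)  # -> 4 (64, 64, 3)
```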
ISSN: 1424-8220