Breaking New Ground in Monocular Depth Estimation with Dynamic Iterative Refinement and Scale Consistency
Main Authors:
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/15/2/674
Summary: Monocular depth estimation (MDE) is a critical task in computer vision with applications in autonomous driving, robotics, and augmented reality. However, predicting depth from a single image poses significant challenges, especially in dynamic scenes where moving objects introduce scale ambiguity and inaccuracies. In this paper, we propose the Dynamic Iterative Monocular Depth Estimation (DI-MDE) framework, which integrates an iterative refinement process with a novel scale-alignment module to address these issues. Our approach combines elastic depth bins that adjust dynamically based on uncertainty estimates with a scale-alignment mechanism to ensure consistency between static and dynamic regions. Leveraging self-supervised learning, DI-MDE does not require ground truth depth labels, making it scalable and applicable to real-world environments. Experimental results on standard datasets such as SUN RGB-D and KITTI demonstrate that our method achieves state-of-the-art performance, significantly improving depth prediction accuracy in dynamic scenes. This work contributes a robust and efficient solution to the challenges of monocular depth estimation, offering advancements in both depth refinement and scale consistency.
ISSN: 2076-3417
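
The summary above mentions elastic depth bins that adjust with per-pixel uncertainty and a probability-weighted depth readout. The sketch below is a minimal illustration of that general idea only: the function names, the log-spaced base bins, and the exact widening rule are assumptions made for this example and do not reproduce the authors' published DI-MDE implementation.

```python
# Illustrative sketch (assumptions, not the paper's code): per-pixel depth
# bins whose spacing widens where an uncertainty map is high, plus a soft
# depth prediction computed as a probability-weighted sum over bin centers.
import math
import torch

def elastic_depth_bins(uncertainty, num_bins=64, d_min=0.1, d_max=10.0):
    """Return per-pixel bin centers; spacing grows with uncertainty.

    uncertainty: (B, 1, H, W) tensor in [0, 1]; higher = less confident.
    """
    B, _, H, W = uncertainty.shape
    # Base bin centers, log-spaced between d_min and d_max (a common choice).
    base = torch.logspace(math.log10(d_min), math.log10(d_max), num_bins)
    base = base.view(1, num_bins, 1, 1).expand(B, -1, H, W)  # (B, K, H, W)
    # Assumed "elastic" rule: confident pixels keep the base spacing,
    # uncertain pixels stretch their bins away from the mid-range value.
    spread = 1.0 + uncertainty                                # in [1, 2]
    mid = (d_min + d_max) / 2.0
    bins = mid + (base - mid) * spread
    return bins.clamp(d_min, d_max)

def depth_from_bins(logits, bins):
    """Soft depth prediction: softmax over bins, weighted sum of centers."""
    probs = torch.softmax(logits, dim=1)                      # (B, K, H, W)
    return (probs * bins).sum(dim=1, keepdim=True)            # (B, 1, H, W)

if __name__ == "__main__":
    # Random tensors stand in for decoder outputs in this toy example.
    logits = torch.randn(2, 64, 48, 64)    # per-bin scores
    uncert = torch.rand(2, 1, 48, 64)      # per-pixel uncertainty estimate
    depth = depth_from_bins(logits, elastic_depth_bins(uncert))
    print(depth.shape)                      # torch.Size([2, 1, 48, 64])
```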