Density-Aware Tree–Graph Cross-Message Passing for LiDAR Point Cloud 3D Object Detection
| Main Authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Remote Sensing |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2072-4292/17/13/2177 |
| Summary: | LiDAR-based 3D object detection is fundamental in autonomous driving but remains challenging due to the irregularity, unordered nature, and non-uniform density of point clouds. Existing methods primarily rely on either graph-based or tree-based representations: graph-based models capture fine-grained local geometry, while tree-based approaches encode hierarchical global semantics. However, these paradigms are often used independently, limiting their overall representational capacity. In this paper, we propose density-aware tree–graph cross-message passing (DA-TGCMP), a unified framework that exploits the complementary strengths of both structures to enable more expressive and robust feature learning. Specifically, we introduce a density-aware graph construction (DAGC) strategy that adaptively models geometric relationships in regions of varying point density, and a hierarchical tree representation (HTR) that captures multi-scale contextual information. To bridge the gap between local precision and global context, we design a tree–graph cross-message-passing (TGCMP) mechanism that enables bidirectional interaction between graph and tree features. Experimental results on three large-scale benchmarks, KITTI, nuScenes, and Waymo, show that our method achieves competitive performance. Under the moderate difficulty setting, DA-TGCMP outperforms VoPiFNet by approximately 2.59%, 0.49%, and 3.05% in the car, pedestrian, and cyclist categories, respectively. |
| ISSN: | 2072-4292 |
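
The record contains no implementation details, but the snippet below is a minimal, hypothetical sketch of the kind of density-aware graph construction (DAGC) the abstract describes: a point-cloud graph whose neighborhood radius adapts to local point density, so sparse regions still gather enough neighbors. The function name `density_aware_graph`, the kNN-based density proxy, and the radius rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def density_aware_graph(points, k=16, base_radius=2.0):
    """Connect each point to neighbors within a radius that shrinks in
    dense regions and grows in sparse ones (illustrative only)."""
    n = points.shape[0]
    # Pairwise Euclidean distances; O(n^2) is acceptable for a small demo cloud.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Local density proxy: inverse mean distance to the k nearest neighbors.
    knn_dists = np.sort(dists, axis=1)[:, 1:k + 1]
    density = 1.0 / (knn_dists.mean(axis=1) + 1e-6)
    # Adaptive radius: sparse (low-density) points search a wider neighborhood.
    radius = base_radius * density.mean() / density
    edges = []
    for i in range(n):
        neighbors = np.where((dists[i] > 0.0) & (dists[i] <= radius[i]))[0]
        edges.extend((i, j) for j in neighbors)
    return np.asarray(edges)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 10.0, size=(200, 3))  # synthetic point cloud
    print("edges:", density_aware_graph(cloud).shape)
```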