AlignFusionNet: Efficient Cross-Modal Alignment and Fusion for 3D Semantic Occupancy Prediction
Environmental perception is a critical component of autonomous vehicles, and multimodal perception systems significantly enhance perception capability by integrating camera and LiDAR data. This paper proposes a novel framework, AlignFusionNet, which effectively combines image and point clo...
| Main Authors: | Ziyi Xu, Legan Qi, Hongzhou Du, Jiaqi Yang, Zhenglin Chen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11082274/ |
Similar Items
- Fusion-Optimized Multimodal Entity Alignment with Textual Descriptions
  by: Chenchen Wang, et al. Published: (2025-06-01)
- PASeg: positional-guided segmenter with multimodal semantic alignment for enhancing urban scene 3D semantic segmentation
  by: Yang Luo, et al. Published: (2025-08-01)
- TF-CMFA: Robust Multimodal 3D Object Detection for Dynamic Environments Using Temporal Fusion and Cross-Modal Alignment
  by: Yujing Wang, et al. Published: (2025-01-01)
- Alignment-Enhanced Interactive Fusion Model for Complete and Incomplete Multimodal Hand Gesture Recognition
  by: Shengcai Duan, et al. Published: (2023-01-01)
- Multi-level fusion with fine-grained alignment for multimodal sentiment analysis
  by: Xiaoge Li, et al. Published: (2025-06-01)