LSAP: A Learned Structure-Aware Pruning Point Cloud Classification Approach for Cyber Physical Social Intelligence
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Tsinghua University Press, 2025-06-01 |
| Series: | Big Data Mining and Analytics |
| Subjects: | |
| Online Access: | https://www.sciopen.com/article/10.26599/BDMA.2024.9020069 |
| Summary: | To enhance the ability of Cyber Physical Social Intelligence (CPSI) to process three-dimensional spatial information, we investigate the problem of object detection in 3D Light Detection And Ranging (LiDAR) point clouds. Traditional point-based processing pipelines typically reduce memory and computational burden by progressively downsampling the input point cloud with task-agnostic random sampling or farthest point sampling. However, these methods often ignore the inherent structural information among the points. To address this gap, we introduce a novel structure-aware point cloud pruning technique that uses normal vector scoring to capture the critical structural details within point clouds. Furthermore, we implement a Learning-based Structure-Aware Pruning (LSAP) method to avoid the high computational cost associated with top-k selection algorithms. Comprehensive experiments on the ModelNet40 dataset verify the effectiveness of our method. Notably, our LSAP-compressed model achieves a remarkable accuracy of 91.2% on ModelNet40 with a pruning rate of 93.5%, attaining up to a 15.9× reduction in FLoating point OPerations (FLOPs) with less than 2.3% accuracy degradation. This significantly outperforms baseline pruning methods, delivering an accuracy improvement of 1.3%. (An illustrative sketch of the normal-vector scoring idea follows this record.) |
| ISSN: | 2096-0654, 2097-406X |
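
The abstract describes scoring points by their normal-vector variation so that structurally salient regions (edges, corners) survive pruning. Below is a minimal, hedged sketch of that scoring idea; it is not the authors' LSAP implementation. The function names, the neighbourhood size `k`, and the exact scoring rule (mean deviation of neighbouring normals from a point's own PCA normal) are illustrative assumptions, and it uses plain top-k selection where the paper's learned pruner would go.

```python
# Hedged sketch of normal-vector scoring for structure-aware point pruning.
# Assumptions: k-NN neighbourhoods, PCA-based normals, and a variation-based
# score; LSAP itself replaces the final top-k step with a learned predictor.
import numpy as np

def knn_indices(points: np.ndarray, k: int) -> np.ndarray:
    """Brute-force k-nearest neighbours (adequate for small clouds)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]  # drop column 0 (the point itself)

def estimate_normals(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Per-point normal = eigenvector of the smallest local covariance eigenvalue."""
    nbrs = points[knn_indices(points, k)]                 # (N, k, 3)
    centered = nbrs - nbrs.mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', centered, centered) / k
    _, vecs = np.linalg.eigh(cov)                         # eigenvalues ascending
    return vecs[:, :, 0]                                  # (N, 3) unit normals

def structure_scores(points: np.ndarray, k: int = 16) -> np.ndarray:
    """Score = local normal variation; flat regions score low, edges high."""
    normals = estimate_normals(points, k)
    nbr_normals = normals[knn_indices(points, k)]         # (N, k, 3)
    # |cosine| handles the sign ambiguity of PCA normals.
    cos = np.abs(np.einsum('nki,ni->nk', nbr_normals, normals))
    return 1.0 - cos.mean(axis=1)                         # in [0, 1]

def prune(points: np.ndarray, keep_ratio: float = 0.065) -> np.ndarray:
    """Keep the highest-scoring points (top-k stand-in for the learned step)."""
    scores = structure_scores(points)
    n_keep = max(1, int(round(keep_ratio * len(points))))
    return points[np.argsort(scores)[-n_keep:]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(512, 3)).astype(np.float32)
    kept = prune(cloud, keep_ratio=0.065)                 # ~93.5% pruning rate
    print(kept.shape)                                     # e.g. (33, 3)
```

With `keep_ratio=0.065` the sketch mirrors the paper's reported 93.5% pruning rate. The sort-based selection here is exactly the top-k cost the abstract says LSAP's learned component is designed to avoid.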