Points2Model: a neural-guided 3D building wireframe reconstruction from airborne LiDAR point clouds

Bibliographic Details
Main Authors: Perpetual Hope Akwensi, Akshay Bharadwaj, Ruisheng Wang
Format: Article
Language: English
Published: Taylor & Francis Group 2025-08-01
Series: International Journal of Digital Earth
Subjects:
Online Access: https://www.tandfonline.com/doi/10.1080/17538947.2025.2458682
Description
Summary: 3D building wireframe models offer a simple, flexible, yet effective means of digitally representing real-world buildings with numerous application benefits. However, generating them from airborne LiDAR point clouds (APCs) is challenging due to issues like façade/roof occlusions, point density variations and noise. To create accurate building wireframe models effectively in the face of these issues, we propose explicitly learning to fill in the areas of occlusion in the APC and implicitly learning to enhance the point resolution via up-sampling for effective primitive extraction. To generate wireframe models from the up-sampled points, we developed a corner-edge hypothesis and selection strategy, where optimal corner and edge candidates and their accurate assembly are determined via a set of constraints. Experiments conducted on data from the Building3D dataset demonstrate that our proposed pipeline can effectively reconstruct wireframe models from APCs despite these challenges. Ablations and comparisons with other existing methods further show the need for point completion and up-sampling processes in surface reconstruction pipelines.
ISSN: 1753-8947
1753-8955
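
The summary above mentions a corner-edge hypothesis and selection step in which candidate corners and edges are filtered by a set of constraints, but the record does not spell those constraints out. The Python sketch below only illustrates the general idea under assumed constraints: hypothesize an edge between every pair of corner candidates, then keep edges whose length and point support along the segment pass simple thresholds. The function name, thresholds, and coverage test are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hypothesize_and_select_edges(corners, points, support_radius=0.3,
                                 min_coverage=0.8, max_edge_length=15.0):
    """Illustrative corner-edge hypothesis and selection (assumed constraints).

    corners: (N, 3) array of corner candidates.
    points:  (M, 3) array of (up-sampled) building points.
    Returns a list of (i, j, coverage) for edges that pass the checks.
    """
    selected = []
    n = len(corners)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = corners[i], corners[j]
            d = b - a
            length = np.linalg.norm(d)
            if length < 1e-6 or length > max_edge_length:
                continue  # constraint: drop degenerate or overly long edges
            # Project all points onto the candidate segment a-b.
            t = np.clip((points - a) @ d / (length ** 2), 0.0, 1.0)
            closest = a + t[:, None] * d
            dist = np.linalg.norm(points - closest, axis=1)
            near = t[dist < support_radius]     # segment parameters of supporting points
            if near.size == 0:
                continue
            # Coverage constraint: fraction of 20 equal segment cells that
            # contain at least one supporting point.
            cells = np.clip((near * 20).astype(int), 0, 19)
            coverage = np.unique(cells).size / 20.0
            if coverage >= min_coverage:
                selected.append((i, j, coverage))
    return selected

# Toy example: four roof corners of a flat rectangular roof and points
# sampled densely along its outline; the four boundary edges are kept,
# the unsupported diagonals are rejected.
corners = np.array([[0, 0, 5.0], [10, 0, 5.0], [10, 6, 5.0], [0, 6, 5.0]])
outline = [(0, 1), (1, 2), (2, 3), (3, 0)]
pts = np.vstack([corners[a] + s * (corners[b] - corners[a])
                 for a, b in outline for s in np.linspace(0, 1, 60)])
print(hypothesize_and_select_edges(corners, pts))
```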