GS-GVINS: A Tightly-Integrated GNSS-Visual-Inertial Navigation System Augmented by 3D Gaussian Splatting

Bibliographic Details
Main Authors: Zelin Zhou, Shichuang Nie, Saurav Uprety, Hongzhou Yang
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11080066/
Description
Summary: Accurate navigation is critical for autonomous vehicles in today’s diverse traffic environments. Integrating Global Navigation Satellite System (GNSS), Inertial Navigation System (INS), and camera measurements has demonstrated significant robustness and high accuracy for navigation in complex environments. However, most integrated systems rely on feature-tracking-based visual odometry, which degrades under feature sparsity, highly dynamic motion, and significant illumination changes. Recently, the emergence of 3D Gaussian Splatting (3DGS) has drawn significant attention in 3D map reconstruction and visual SLAM. While extensive research has explored 3DGS for indoor trajectory tracking using visual sensors alone or in combination with Light Detection and Ranging (LiDAR) and an Inertial Measurement Unit (IMU), its integration with GNSS for large-scale outdoor navigation remains underexplored. To address these limitations, we propose GS-GVINS: a tightly-integrated GNSS-Visual-Inertial Navigation System augmented by 3DGS. The system leverages 3D Gaussians as a continuous, differentiable scene representation in large-scale outdoor environments, enhancing navigation performance through the constructed 3D Gaussian map. Notably, GS-GVINS is the first GNSS-Visual-Inertial navigation application that directly utilizes the analytical Jacobians of the SE(3) camera pose with respect to the 3D Gaussians. To maintain the quality of 3DGS rendering during extreme dynamic states, we introduce a motion-aware 3D Gaussian pruning mechanism that updates the map based on the relative pose translation and the accumulated opacity along the camera ray. For validation, we test our system in three driving environments: open-sky, suburban, and urban. Both self-collected and public datasets are used for evaluation. The results demonstrate the effectiveness of GS-GVINS in enhancing navigation accuracy across diverse driving environments.
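The abstract highlights analytical Jacobians of the SE(3) camera pose with respect to the 3D Gaussians but does not detail their form. As a hedged illustration of the underlying idea, the standard 2x6 Jacobian of a projected Gaussian center with respect to a left se(3) pose perturbation can be sketched as below; the function name, perturbation convention (translation first, then rotation), and pinhole intrinsics are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pixel_pose_jacobian(p_cam, fx, fy):
    """Jacobian of the projected pixel of a Gaussian center w.r.t. a left
    se(3) perturbation of the camera pose (2x6 matrix).

    p_cam : (3,) Gaussian center expressed in the camera frame
    fx, fy : pinhole focal lengths in pixels
    """
    x, y, z = p_cam
    # Jacobian of the pinhole projection w.r.t. the camera-frame point (2x3)
    J_proj = np.array([[fx / z, 0.0, -fx * x / z**2],
                       [0.0, fy / z, -fy * y / z**2]])
    # Jacobian of the camera-frame point w.r.t. the se(3) perturbation
    # [translation | rotation]: d p_cam / d xi = [ I | -[p_cam]_x ]  (3x6)
    skew = np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])
    J_point = np.hstack([np.eye(3), -skew])
    return J_proj @ J_point  # (2, 6)
```

Chaining this with the Jacobian of the rendered color w.r.t. the pixel location is what makes the camera pose directly optimizable against a 3D Gaussian map.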
ISSN: 2169-3536
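The motion-aware pruning mechanism described in the summary (updating the map from the relative pose translation and the accumulated opacity along the camera ray) is not specified beyond that sentence. A minimal sketch of one plausible such rule is given below; the function name, thresholds, and the way opacity and motion are combined are all hypothetical assumptions, not the paper's method:

```python
import numpy as np

def prune_gaussians(opacity_accum, rel_translation,
                    opacity_floor=0.05, motion_thresh=0.5):
    """Return a boolean keep-mask over the map's 3D Gaussians.

    opacity_accum   : (N,) accumulated opacity each Gaussian contributed
                      along the camera rays of recent frames
    rel_translation : (3,) relative pose translation since the last update
    """
    speed = np.linalg.norm(rel_translation)
    # Under fast motion, raise the opacity floor so that weakly-supported
    # Gaussians are pruned more aggressively.
    threshold = opacity_floor * (1.0 + speed / motion_thresh)
    return opacity_accum >= threshold
```

The design intent sketched here is that Gaussians which contribute little accumulated opacity are likely spurious, and that fast camera motion (large relative translation) warrants stricter pruning to keep rendering quality stable.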