Reconstruction Matters: Learning Geometry-Aligned BEV Representation through 3D Gaussian Splatting

1Bosch Research North America & Bosch Center for Artificial Intelligence (BCAI)
2Case Western Reserve University
3Washington University in St. Louis
Corresponding author

Abstract


Bird's-Eye-View (BEV) perception serves as a cornerstone for autonomous driving, offering a unified spatial representation that fuses surrounding-view images to enable reasoning for various downstream tasks, such as semantic segmentation, 3D object detection, and motion prediction. However, most existing BEV perception frameworks adopt an end-to-end training paradigm, where image features are directly transformed into the BEV space and optimized solely through downstream task supervision. This formulation treats the entire perception process as a black box, often lacking explicit 3D geometric understanding and interpretability, which leads to suboptimal performance. In this paper, we claim that an explicit 3D representation matters for accurate BEV perception, and we propose Splat2BEV, a Gaussian Splatting-assisted framework for BEV tasks. Splat2BEV aims to learn BEV feature representations that are both semantically rich and geometrically precise. We first pre-train a Gaussian generator that explicitly reconstructs 3D scenes from multi-view inputs, enabling the generation of geometry-aligned feature representations. These representations are then projected into the BEV space to serve as inputs for downstream tasks. Extensive experiments on the nuScenes and Argoverse datasets demonstrate that Splat2BEV achieves state-of-the-art performance and validate the effectiveness of incorporating explicit 3D reconstruction into BEV perception.

Pipeline


An overview of our training process. Given multi-view perspective images as input, Splat2BEV first trains a feed-forward Gaussian generator to reconstruct the 3D scene using 3D Gaussian Splatting. In stage 2, the Gaussian generator is frozen, and the reconstructed geometry along with its associated features are projected onto the BEV plane. A BEV encoder and segmentation head are then trained on top of this BEV representation to perform downstream tasks. Finally, in the third stage, the Gaussian generator, BEV encoder, and segmentation head are jointly fine-tuned, allowing geometry, semantics, and task-specific cues to be harmonized for optimal BEV perception.
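To make the stage-2 projection step concrete, here is a minimal NumPy sketch of how reconstructed Gaussians and their features could be splatted onto a BEV grid. This is an illustrative assumption, not the authors' implementation: all function names, the opacity-weighted accumulation, and the grid parameters are hypothetical, and the actual method may use differentiable rasterization rather than center-point scatter.

```python
# Illustrative sketch (NOT the Splat2BEV implementation): project Gaussian
# centers and per-Gaussian features onto a BEV feature map by opacity-weighted
# accumulation. All names and parameters are assumptions for exposition.
import numpy as np

def splat_to_bev(means, features, opacities, bev_range=50.0, grid_size=200):
    """Project Gaussian centers and per-Gaussian features to a BEV grid.

    means:     (N, 3) Gaussian centers in ego coordinates (x forward, y left)
    features:  (N, C) per-Gaussian feature vectors
    opacities: (N,)   per-Gaussian opacities, used here as splatting weights
    Returns a (grid_size, grid_size, C) BEV feature map.
    """
    N, C = features.shape
    bev_feat = np.zeros((grid_size, grid_size, C))
    weight = np.zeros((grid_size, grid_size))

    # Map x/y coordinates in [-bev_range, bev_range] meters to grid indices.
    ij = ((means[:, :2] + bev_range) / (2 * bev_range) * grid_size).astype(int)
    valid = np.all((ij >= 0) & (ij < grid_size), axis=1)

    for (i, j), f, a in zip(ij[valid], features[valid], opacities[valid]):
        bev_feat[i, j] += a * f      # opacity-weighted feature accumulation
        weight[i, j] += a
    nz = weight > 0                  # normalize cells that received Gaussians
    bev_feat[nz] /= weight[nz, None]
    return bev_feat
```

In a real pipeline each Gaussian would typically contribute to multiple cells according to its projected 2D covariance; collapsing each Gaussian to its center keeps the sketch short while preserving the core idea of carrying reconstructed geometry and features into the BEV space.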

Visualization of Reconstruction Quality

Visualization of reconstruction quality. The left side shows the 3D reconstruction, its feature field, and the corresponding BEV map and projected BEV feature. The right side provides zoomed-in views that highlight fine-grained details.

Results on Downstream Segmentation

In this section, we provide qualitative results of our method on downstream segmentation tasks, including vehicle, pedestrian, and lane segmentation.

Geometry-aligned Features

Visual comparison of features learned with and without explicit reconstruction. The BEV feature refers to the feature map produced by the BEV encoder, while the projected feature denotes the feature directly projected from the 3D representation. The geometry-aligned features learned through explicit reconstruction exhibit sharper boundaries and more distinct object shapes, which are crucial for accurate BEV perception. In contrast, the features learned without explicit reconstruction appear more blurred and less structured.


Geometry Accuracy

Demonstration of geometry accuracy. Although the perspective images clearly show several cars parked in parallel, the ground-truth BEV labels and GaussianLSS results exhibit significant overlap between vehicles, stemming from projection errors in the 3D bounding-box annotations. In contrast, our method leverages explicit 3D reconstruction to recover accurate spatial boundaries and produces a more faithful BEV segmentation.


Citation

If you find our work helpful, please consider citing us:


BibTeX entry coming soon.