GSMem: 3D Gaussian Splatting as Persistent Spatial Memory for Zero-Shot Embodied Exploration and Reasoning

1Case Western Reserve University
2Spatial AI & Robotics Lab, University at Buffalo
Corresponding author
*denotes equal contribution

Abstract


Effective embodied exploration requires agents to accumulate and retain spatial knowledge over time. However, existing scene representations, such as discrete scene graphs or static view-based snapshots, lack post-hoc re-observability. If an initial observation misses a target, the resulting memory omission is often irrecoverable. To bridge this gap, we propose GSMem, a zero-shot embodied exploration and reasoning framework built upon 3D Gaussian Splatting (3DGS). By explicitly parameterizing continuous geometry and dense appearance, 3DGS serves as a persistent spatial memory that endows the agent with Spatial Recollection: the ability to render photorealistic novel views from optimal, previously unoccupied viewpoints. To operationalize this, GSMem employs a retrieval mechanism that simultaneously leverages parallel object-level scene graphs and semantic-level language fields. This complementary design robustly localizes target regions, enabling the agent to “hallucinate” optimal views for high-fidelity Vision-Language Model (VLM) reasoning. Furthermore, we introduce a hybrid exploration strategy that combines VLM-driven semantic scoring with a 3DGS-based coverage objective, balancing task-aware exploration with geometric coverage. Extensive experiments on embodied question answering and lifelong navigation demonstrate the robustness and effectiveness of our framework.

Multi-level Retrieval-Rendering


Demonstration of Multi-level Retrieval-Rendering. We retrieve task-relevant regions from two complementary cues: object-level candidates ranked by a VLM from the scene graph, and semantic-level regions retrieved from the 3D language field using CLIP-based target descriptions. Semantic Gaussians are grouped into spatially coherent clusters via neighborhood connectivity, and the most relevant clusters are kept as semantic ROIs. For each ROI, we select the best rendering view with a sample-then-score strategy. The selected view is rendered and sent to the VLM for reasoning; if evidence is still insufficient, the agent continues exploration.
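The clustering and view-selection steps above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `cluster_by_connectivity`, `sample_then_score`, and the toy scoring function are hypothetical names, and the real system operates on semantic Gaussians with a learned CLIP relevance field rather than bare 3D points.

```python
import math
from collections import deque

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cluster_by_connectivity(points, radius):
    """Group points into spatially coherent clusters: two points are
    neighbors if within `radius`; clusters are the connected components,
    found here by a simple BFS (a stand-in for neighborhood connectivity
    over semantic Gaussians)."""
    visited = [False] * len(points)
    clusters = []
    for i in range(len(points)):
        if visited[i]:
            continue
        comp, queue = [], deque([i])
        visited[i] = True
        while queue:
            j = queue.popleft()
            comp.append(j)
            for k in range(len(points)):
                if not visited[k] and dist(points[j], points[k]) <= radius:
                    visited[k] = True
                    queue.append(k)
        clusters.append(comp)
    return clusters

def sample_then_score(roi_center, candidate_views, score_fn):
    """Sample-then-score view selection: evaluate each candidate
    viewpoint with `score_fn` and keep the best one for rendering."""
    return max(candidate_views, key=lambda v: score_fn(v, roi_center))
```

In the full pipeline, `score_fn` would rate a rendered view's quality for the target ROI; here any callable comparing a viewpoint against the ROI center works, e.g. preferring a fixed standoff distance.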

Hybrid Exploration Strategy


Demonstration of our Hybrid Exploration Strategy. We follow frontier-based exploration and score each candidate frontier with two complementary criteria: a VLM-based semantic relevance score for task usefulness, and a geometry-oriented information score that estimates expected map improvement from the 3DGS rendering Jacobians. The policy is a simple decision rule: if any frontier has sufficiently high semantic relevance, we choose the most semantically relevant one; otherwise, we switch to the frontier with the highest geometric score to maximize informative coverage.
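The decision rule described above reduces to a short function. This is a hedged sketch under simplifying assumptions: `select_frontier` and the threshold `tau` are illustrative names, and the two score functions stand in for the VLM-based semantic relevance score and the 3DGS Jacobian-based information score, which are far more involved in practice.

```python
def select_frontier(frontiers, semantic_score, geometric_score, tau):
    """Hybrid frontier selection: if any frontier's semantic relevance
    reaches the threshold `tau`, go to the most semantically relevant
    one; otherwise fall back to the frontier with the highest geometric
    information score to maximize map coverage."""
    best_semantic = max(frontiers, key=semantic_score)
    if semantic_score(best_semantic) >= tau:
        return best_semantic
    return max(frontiers, key=geometric_score)
```

With dictionary-backed scores, a frontier with semantic relevance above `tau` wins outright; when all semantic scores fall below the threshold, the geometry-driven frontier is chosen instead.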

Real-world Demo

In this section, we provide a real-world demo deployed on a Unitree Go2 quadruped to showcase the real-world capability of our GSMem framework. The video demonstrates the agent performing zero-shot embodied exploration and reasoning in a real environment, leveraging the persistent spatial memory provided by 3D Gaussian Splatting to navigate and interact with its surroundings effectively.


Citation

If you find our work helpful, please consider citing us:


BibTeX entry coming soon.