BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting

Case Western Reserve University

Input Blurry Video

First Stage

Second Stage

Given a set of severely motion-blurred images with moving objects, BARD-GS first models the camera motion during the exposure time with learnable camera poses to restore sharp static regions. In the second stage, object motion is modeled by the trajectories of Gaussians to achieve deblurring in dynamic regions.

Abstract

3D Gaussian Splatting (3DGS) has shown remarkable potential for static scene reconstruction, and recent advancements have extended its application to dynamic scenes. However, reconstruction quality depends heavily on high-quality input images and precise camera poses, which are not trivial to obtain in real-world scenarios. Capturing dynamic scenes with handheld monocular cameras, for instance, typically involves simultaneous movement of both the camera and objects within a single exposure. This combined motion frequently results in image blur that existing methods cannot adequately handle. To address these challenges, we introduce BARD-GS, a novel approach for robust dynamic scene reconstruction that effectively handles blurry inputs and imprecise camera poses. BARD-GS comprises two main components: 1) camera motion deblurring and 2) object motion deblurring. By explicitly decomposing motion blur into camera motion blur and object motion blur and modeling them separately, we achieve significantly improved rendering results in dynamic regions. In addition, we collect a real-world motion blur dataset of dynamic scenes to evaluate our approach. Extensive experiments demonstrate that BARD-GS effectively reconstructs high-quality dynamic scenes under realistic conditions, significantly outperforming existing methods.

Motion Blur Formation


The formation process of motion blur. It originates from two sources: camera-induced blur caused by camera movement during exposure, and object-induced blur resulting from fast-moving objects. The static regions of a scene are affected solely by camera motion blur, while dynamic regions are impacted by both camera and object motion blur.
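The blur formation described above can be viewed as integrating sharp latent images over the exposure window, commonly approximated as a discrete average. A minimal sketch of this idea (the `render_fn` toy renderer and the 1D "image" are illustrative assumptions, not part of the paper's implementation):

```python
import numpy as np

def simulate_motion_blur(render_fn, times):
    """Approximate a blurry capture as the average of sharp renders
    sampled at discrete timestamps within one exposure window."""
    frames = [render_fn(t) for t in times]
    return np.mean(frames, axis=0)

# Toy renderer: a bright dot sweeping across a 1D "image" during exposure.
def render_fn(t):
    img = np.zeros(8)
    img[int(t * 7)] = 1.0  # dot position advances with time
    return img

# Averaging 5 sub-frame renders smears the dot into a streak.
blurry = simulate_motion_blur(render_fn, np.linspace(0.0, 1.0, 5))
```

The same averaging model underlies both blur sources: camera-induced blur varies the viewpoint across sub-frames, while object-induced blur varies the scene content itself.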

Pipeline


An overview of the pipeline. Our method consists of two stages: camera motion deblurring and object motion deblurring. In the first stage, we handle camera motion blur by modeling the camera's trajectory during each exposure, resulting in sharp reconstruction of the static regions. We then use the optimized camera poses, together with depth maps obtained from DepthAnything, to initialize the dynamic Gaussians. In the second stage, we address object motion blur by modeling the trajectory of the 3D Gaussians within each exposure using a deformation field, which allows us to achieve clear reconstruction in the dynamic regions.
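Modeling the camera's trajectory within an exposure can be sketched as interpolating between learnable start and end poses and rendering at each sub-frame pose. The sketch below is a simplified illustration (translation-only linear interpolation; the actual method would interpolate full SE(3) poses):

```python
import numpy as np

def interp_exposure_poses(pose_start, pose_end, n):
    """Sample n camera positions along a linear path spanning one exposure.
    Illustrative only: interpolates translations, not full SE(3) poses."""
    ts = np.linspace(0.0, 1.0, n)
    return [(1.0 - t) * pose_start + t * pose_end for t in ts]

# Example: 5 sub-frame poses between two endpoint positions; renders at
# these poses would be averaged to match the observed blurry image.
poses = interp_exposure_poses(np.zeros(3), np.array([1.0, 0.0, 0.0]), 5)
```

In the second stage, the same idea transfers from camera poses to Gaussian positions: a deformation field moves each Gaussian along its own trajectory within the exposure, so averaging the sub-frame renders reproduces the object-induced blur.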

Results on Deblurring

Here we demonstrate the performance of BARD-GS in the deblurring task compared to baseline methods. Since Deformable 3D Gaussian (D3DGS) and 4DGS are not inherently designed to handle motion blur, we apply a per-frame image deblurring method, MPRNet, as preprocessing for these methods. As illustrated, BARD-GS achieves notably superior results in dynamic regions, such as the cat's face and the spinning paper windmill.

D3DGS + MPRNet

4DGS+MPRNet

DyBluRF

BARD-GS (Ours)

Results on Novel View Synthesis

In this section, we evaluate the performance of BARD-GS on the novel view synthesis task using our proposed real-world blurry dataset. Due to the significantly more severe motion blur present in our dataset compared to synthesized datasets, image deblurring methods such as MPRNet may struggle or fail, leading to poor reconstruction quality in both static and dynamic regions.

D3DGS + MPRNet

4DGS + MPRNet

DyBluRF

BARD-GS (Ours)

Real-world Blurry Dataset

In the absence of existing dynamic scene datasets with motion blur, we collect a real-world dataset to address this gap. Different from synthetic datasets used in previous works, where blurry images are generated by averaging consecutive frames, our dataset captures motion blur that closely aligns with real-world scenarios. We provide paired blurry and sharp images captured from diverse environments.

The first row shows blurry videos and the second row is their corresponding sharp videos.


Citation

If you find our work helpful, please consider citing us:

@article{lu2025bard,
  title={BARD-GS: Blur-Aware Reconstruction of Dynamic Scenes via Gaussian Splatting},
  author={Lu, Yiren and Zhou, Yunlai and Liu, Disheng and Liang, Tuo and Yin, Yu},
  journal={arXiv preprint arXiv:2503.15835},
  year={2025}
}