GSFix3D: Diffusion-Guided Repair of Novel Views in Gaussian Splatting

Jiaxin Wei, Stefan Leutenegger, Simon Schaefer
Technical University of Munich, ETH Zurich

TL;DR: Remove artifacts and fill holes in novel views of 3DGS scenes.

Abstract

Recent developments in 3D Gaussian Splatting have significantly enhanced novel view synthesis, yet generating high-quality renderings from extreme novel viewpoints or partially observed regions remains challenging. Meanwhile, diffusion models exhibit strong generative capabilities, but their reliance on text prompts and lack of awareness of specific scene information hinder their use in accurate 3D reconstruction tasks. To address these limitations, we introduce GSFix3D, a novel framework that improves visual fidelity in under-constrained regions by distilling prior knowledge from diffusion models into 3D representations, while preserving consistency with observed scene details. At its core is GSFixer, a latent diffusion model obtained via our customized fine-tuning protocol that can leverage both mesh and 3D Gaussians to adapt pretrained generative models to a variety of environments and artifact types from different reconstruction methods, enabling robust novel view repair for unseen camera poses. Moreover, we propose a random mask augmentation strategy that empowers GSFixer to plausibly inpaint missing regions. Experiments on challenging benchmarks demonstrate that our GSFix3D and GSFixer achieve state-of-the-art performance, requiring only minimal scene-specific fine-tuning on captured data. Real-world tests further confirm its resilience to potential pose errors.
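
The random mask augmentation mentioned above could be implemented roughly as follows. This is a minimal sketch of one plausible variant, assuming rectangular masks applied to the conditioning render during fine-tuning; the helper name, mask shapes, and ratios are illustrative assumptions, not taken from the paper.

      import torch

      def random_mask_augment(render: torch.Tensor,
                              max_boxes: int = 4,
                              max_frac: float = 0.4,
                              fill_value: float = 0.0):
          """Zero out random rectangles of a conditioning render (C, H, W).

          Returns the masked render and a binary mask (1 = kept, 0 = masked),
          so the fine-tuned model learns to inpaint masked regions from context.
          """
          c, h, w = render.shape
          mask = torch.ones(1, h, w, dtype=render.dtype, device=render.device)
          num_boxes = int(torch.randint(1, max_boxes + 1, (1,)))
          for _ in range(num_boxes):
              # Sample a random box size and position (assumed rectangular masks).
              bh = int(torch.randint(1, max(2, int(h * max_frac)), (1,)))
              bw = int(torch.randint(1, max(2, int(w * max_frac)), (1,)))
              y0 = int(torch.randint(0, h - bh + 1, (1,)))
              x0 = int(torch.randint(0, w - bw + 1, (1,)))
              mask[:, y0:y0 + bh, x0:x0 + bw] = 0.0
          masked = render * mask + fill_value * (1.0 - mask)
          return masked, mask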

Method Overview

Given initial 3D reconstructions in the form of a mesh and 3DGS, we render novel views and use them as conditional inputs to GSFixer. Through a reverse diffusion process, GSFixer generates repaired images with artifacts removed and missing regions inpainted. These outputs are then distilled back into 3D by optimizing the 3DGS representation with a photometric loss.
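
As a rough illustration of this repair-and-distill loop, the sketch below uses hypothetical GaussianScene / MeshScene renderers and a GSFixer callable as placeholders for the actual components; the APIs and the L1 photometric loss are assumptions for illustration, not the released implementation.

      import torch
      import torch.nn.functional as F

      # Hypothetical interfaces (placeholders, not the actual GSFix3D code):
      #   gs_scene.render(pose)     -> artifact-prone 3DGS render, (3, H, W) tensor
      #   mesh_scene.render(pose)   -> mesh render of the same view
      #   gsfixer(gs_img, mesh_img) -> repaired image via reverse diffusion
      def repair_and_distill(gs_scene, mesh_scene, gsfixer, novel_poses,
                             num_iters=500, lr=1e-3):
          """Repair novel views with GSFixer, then distill them back into the 3DGS scene."""
          # 1) Render conditional inputs and run the diffusion-based repair.
          repaired = []
          with torch.no_grad():
              for pose in novel_poses:
                  gs_img = gs_scene.render(pose)
                  mesh_img = mesh_scene.render(pose)
                  repaired.append(gsfixer(gs_img, mesh_img))

          # 2) Distill: optimize the Gaussian parameters against the repaired
          #    views with a simple photometric loss.
          optimizer = torch.optim.Adam(gs_scene.parameters(), lr=lr)
          for _ in range(num_iters):
              for pose, target in zip(novel_poses, repaired):
                  loss = F.l1_loss(gs_scene.render(pose), target)
                  optimizer.zero_grad()
                  loss.backward()
                  optimizer.step()
          return gs_scene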

Qualitative Results

We compare our method against DIFIX and DIFIX-ref on novel views from the ScanNet++ and Replica datasets.

ScanNet++

Interactive image comparisons showing DIFIX, DIFIX-ref, and GSFixer repairs of SplaTAM, RTG-SLAM, and GSFusion renderings.

Replica

Interactive image comparisons showing DIFIX, DIFIX-ref, and GSFixer repairs of SplaTAM, RTG-SLAM, and GSFusion renderings.

Real-World Results

We collect a stereo sequence inside a ship structure using an Intel RealSense D455 camera and process it with GSFusion. We also evaluate an outdoor scene from the FAST-LIVO dataset using Gaussian-LIC, a LiDAR-Inertial-Camera Gaussian Splatting SLAM system.

Self-collected Ship Data

Interactive image comparisons showing GSFix3D repairs of the GSFusion reconstruction.

FAST-LIVO Outdoor Scene

Interactive image comparisons showing GSFix3D repairs of the Gaussian-LIC reconstruction.

Acknowledgment

The authors gratefully acknowledge support from the EU project AUTOASSESS (Grant 101120732). We also thank Jaehyung Jung and Sebastián Barbas Laina for their assistance with ship data collection and processing, and Helen Oleynikova for her valuable feedback on the manuscript.

BibTeX


      @article{gsfix3d,
        title={GSFix3D: Diffusion-Guided Repair of Novel Views in Gaussian Splatting}, 
        author={Jiaxin Wei and Stefan Leutenegger and Simon Schaefer},
        year={2025},
        eprint={2508.14717},
        archivePrefix={arXiv},
        primaryClass={cs.CV},
        url={https://arxiv.org/abs/2508.14717},
      }