This repository provides a dataloader for the SimVS inconsistent lighting dataset, designed to help researchers experiment with 3D generation from sparse and inconsistent images.
This dataset is part of the research presented in:
SimVS: Simulating World Inconsistencies for Robust View Synthesis
Project Page | CVPR 2025
SimVS addresses the challenge of real-world 3D reconstruction in scenes that contain inconsistencies: objects move and lighting changes over time. The method uses generative augmentation to simulate such inconsistencies and trains a generative model to produce consistent multiview images from sparse, inconsistent inputs. This dataset specifically focuses on scenes with varying illumination conditions.
Install the required dependencies using pip:
pip install -r requirements.txt
The SimVS dataset is available for download here: https://drive.google.com/file/d/1MiqzN4YAqUUKHnftYKOi8iV4PzoN9c_J/view?usp=sharing
This dataset was created specifically to address the lack of existing datasets with multiple illumination conditions and ground truth images under consistent lighting. SimVS provides:
- 5 real-world scenes, each captured under 3 separate lighting conditions
- For each scene, 3 monocular videos taken with approximately the same camera trajectory but different lighting
- Camera pose information for all images, jointly calculated using Hierarchical Localization
Note that some scenes in the dataset are not distortion-corrected.
Each scene in the dataset follows the same directory structure:
scene_name/
├── images/ # All images from all lighting conditions
├── sparse/ # 3D reconstruction information extracted by COLMAP
├── train_list.txt # List of image filenames to use for training
└── test_list.txt # List of image filenames to use for testing
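Before loading a scene, it can be useful to check that its directory matches the layout above. The following helper is not part of the repository's code; it is a minimal sketch assuming only the four entries listed in the tree.

```python
import os

# Hypothetical helper (not provided by the repository): check that a scene
# directory contains the four entries described in the layout above.
EXPECTED = ["images", "sparse", "train_list.txt", "test_list.txt"]

def validate_scene_dir(scene_dir):
    """Return the list of expected entries missing from scene_dir."""
    return [name for name in EXPECTED
            if not os.path.exists(os.path.join(scene_dir, name))]

# A path that does not exist reports every entry as missing.
print(validate_scene_dir("/tmp/__no_such_scene__"))
```

Running this against a downloaded scene such as `chess/` should print an empty list.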
Key components:
- images/: contains all captured images across the different lighting conditions
- sparse/: contains the COLMAP reconstruction data, including camera parameters, 3D points, and image registrations
- train_list.txt: text file listing the filenames of images designated for training
- test_list.txt: text file listing the filenames of images designated for testing/evaluation
The dataloader uses these files to identify which images belong to which split and to load the appropriate camera parameters and image data.
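Reading a split list reduces to parsing one filename per line. This is a sketch under that assumption, not the repository's actual implementation:

```python
import os

def read_split(scene_dir, split):
    """Return the image filenames for a split ("train" or "test").

    Assumes the split files contain one filename per line, as described
    above; blank lines are skipped.
    """
    path = os.path.join(scene_dir, f"{split}_list.txt")
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
```

Joining each returned name with `scene_dir/images/` then yields the full image paths for that split.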
The dataset is designed for evaluating novel-view synthesis methods under inconsistent lighting. The typical experimental setup involves:
- Using 3 frames (one from each inconsistent video) as input
- Rendering novel views that match the lighting condition of one of the videos
- Evaluating the results against held-out ground truth images from that lighting condition
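The split construction above can be sketched as follows. The filename convention here (`cond{K}_frame{N}.png`, with K the lighting-condition index) is hypothetical, chosen only for illustration; the dataset's actual naming may differ.

```python
import re
from collections import defaultdict

def build_eval_split(filenames, target_cond=0):
    """Pick one input frame per lighting condition and hold out the
    remaining frames of `target_cond` as evaluation ground truth.

    Assumes a hypothetical naming scheme "cond{K}_frame{N}.png".
    """
    by_cond = defaultdict(list)
    for name in sorted(filenames):
        m = re.match(r"cond(\d+)_frame\d+\.png$", name)
        if m:
            by_cond[int(m.group(1))].append(name)
    # One input frame from each inconsistent video.
    inputs = [frames[0] for _, frames in sorted(by_cond.items())]
    # Held-out ground truth from the target lighting condition.
    held_out = [f for f in by_cond[target_cond] if f not in inputs]
    return inputs, held_out
```

With 3 lighting conditions this yields 3 input frames and leaves the rest of the target condition's frames for evaluation.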
The dataset contains multiple scenes processed with COLMAP, including camera parameters and multi-view images.
The main entry point is colmap_dataloader.py, which demonstrates how to load a scene from the dataset.
For example, to load the chess scene from the SimVS inconsistent lighting dataset:
python colmap_dataloader.py --data /path/to/simvs/dataset/chess --split train --load-features
The --load-features flag is required to actually load the image data rather than just the metadata.
This script uses functions from the dataloader_helpers directory to parse COLMAP outputs and load image data.
This dataset is compatible with the original WildGaussians training code, allowing you to seamlessly integrate it into the original framework for experiments with inconsistent lighting conditions.
This dataloader is a minimal adaptation from the wonderful WildGaussians repository: https://github.com/jkulhanek/wild-gaussians
Please refer to the original repository for the full WildGaussians method, including training, rendering, and evaluation code.
Authors of the original work: Jonas Kulhanek, Songyou Peng, Zuzana Kukelova, Marc Pollefeys, Torsten Sattler
Paper: WildGaussians: 3D Gaussian Splatting in the Wild (NeurIPS 2024)
If you use this dataset in your research, please cite:
@article{trevithick2024simvs,
title={SimVS: Simulating World Inconsistencies for Robust View Synthesis},
author={Alex Trevithick and Roni Paiss and Philipp Henzler and Dor Verbin and Rundi Wu and Hadi Alzayer and Ruiqi Gao and Ben Poole and Jonathan T. Barron and Aleksander Holynski and Ravi Ramamoorthi and Pratul P. Srinivasan},
journal={arXiv},
year={2024}
}