This folder contains the input data used for the novel-view synthesis benchmark in the paper.
We provide data for the following sites:
- Blenheim Palace
  - Sequence 05 is for training and in-sequence evaluation
  - Sequence 01 is for out-of-sequence evaluation
- Keble College 04
  - The earlier part, around the quad, is for training and in-sequence evaluation
  - The later part, within the lawn, is for out-of-sequence evaluation
- Radcliffe Observatory Quarter (ROQ) 01
  - The earlier part, near the fountain, is for training and in-sequence evaluation
  - The later part, which returns to the fountain via a different route, is for out-of-sequence evaluation
For each sequence, we provide:
- `images_train_eval`: images from the three cameras. Each filename carries a prefix of either "eval" or "train" to indicate the training/test split, which complies with nerfstudio's configuration (see the sketch after this list). See this example to run nerfstudio with this data.
- `sparse`: COLMAP results, which are optionally used by 3DGS for Gaussian initialisation
- `transforms_train_eval.json`: the metadata for training a NeRF, including each camera's pose and the camera parameters. This file can be used directly by nerfstudio.
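The train/eval split can be recovered programmatically from the filename prefixes. Below is a minimal sketch of how one might load the transforms file and partition the frames; the `frames` and `file_path` keys follow the standard nerfstudio transforms format, which we assume this file matches.

```python
import json
from pathlib import Path

# Minimal sketch: load the nerfstudio-style transforms file and split
# frames into train/eval sets using the filename prefix convention
# described above. The "frames" and "file_path" keys are assumed to
# follow the standard nerfstudio transforms format.
transforms = json.loads(Path("transforms_train_eval.json").read_text())

train_frames, eval_frames = [], []
for frame in transforms["frames"]:
    name = Path(frame["file_path"]).name
    if name.startswith("eval"):
        eval_frames.append(frame)
    elif name.startswith("train"):
        train_frames.append(frame)

print(f"{len(train_frames)} training frames, {len(eval_frames)} evaluation frames")
```

With nerfstudio installed, training then typically amounts to pointing a method at the sequence folder, e.g. `ns-train nerfacto --data <sequence_folder>` (the exact command depends on your nerfstudio version; see the example linked above).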
## Additional note on NeRF evaluation with lighting changes
Our camera uses auto-exposure, which is crucial for capturing scenes with widely varying lighting conditions. In the example below, auto-exposure allows us to capture both the dark dome ceiling (blue) and the building outside (red).
This also raises the challenge of colour consistency. In the example above, the colour of the same building differs because of the change in exposure time. In the example below, the colour of the same building differs not only because of the exposure change, but also because the lighting conditions differ: the data was captured on different dates.
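Where per-image exposure times are available (e.g. from image metadata), a simple first-order correction is to linearise pixel values and divide by exposure time, since a linear sensor's response scales with integration time. The sketch below is illustrative only: the gamma approximation and the availability of exposure metadata are assumptions, not properties guaranteed by this dataset.

```python
import numpy as np

def exposure_normalise(image_srgb: np.ndarray, exposure_time_s: float,
                       gamma: float = 2.2) -> np.ndarray:
    """Approximately undo auto-exposure so images are comparable.

    Assumes an 8-bit sRGB-like input and a roughly linear sensor, so
    that linear intensity is proportional to exposure time. Both the
    gamma value and the exposure times are assumptions; consult the
    camera's actual response curve for accurate results.
    """
    linear = (image_srgb.astype(np.float64) / 255.0) ** gamma
    return linear / exposure_time_s  # relative scene radiance estimate

# Two shots of the same surface at different exposures should now agree
# (up to noise and genuine lighting change between capture dates), e.g.:
# radiance_a = exposure_normalise(img_a, 1 / 250)
# radiance_b = exposure_normalise(img_b, 1 / 60)
```

Note that genuine lighting changes between capture dates are not removed by such a correction; supporting research on those is the purpose of the combined reconstruction below.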
Therefore, we also provide an additional COLMAP result for Bodleian Library 01+02. This can facilitate research on handling lighting inconsistency and producing reconstructions with uniform texture across a longer temporal horizon.
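As a starting point, the combined model can be inspected with pycolmap; a minimal sketch follows, in which the path is a placeholder for wherever the Bodleian Library 01+02 result is extracted.

```python
import pycolmap

# Load the combined COLMAP sparse model; the path below is a
# placeholder, not the actual layout of the download.
rec = pycolmap.Reconstruction("bodleian_01_02/sparse/0")
print(rec.summary())  # registered images, 3D points, etc.

# Images from both capture dates are registered in a single frame, so
# cross-date appearance differences can be studied on shared 3D points.
for image_id, image in list(rec.images.items())[:5]:
    print(image_id, image.name)
```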