arxiv:2505.09608

LightLab: Controlling Light Sources in Images with Diffusion Models

Published on May 14 · Submitted by Nadav Magar on May 15
Abstract

AI-generated summary: A diffusion-based method fine-tuned on real and synthetic image pairs provides precise control over light sources and ambient illumination in images, offering more effective relighting than existing methods.

We present a simple, yet effective diffusion-based method for fine-grained, parametric control over light sources in an image. Existing relighting methods either rely on multiple input views to perform inverse rendering at inference time, or fail to provide explicit control over light changes. Our method fine-tunes a diffusion model on a small set of real raw photograph pairs, supplemented by synthetically rendered images at scale, to elicit its photorealistic prior for relighting. We leverage the linearity of light to synthesize image pairs depicting controlled light changes of either a target light source or ambient illumination. Using this data and an appropriate fine-tuning scheme, we train a model for precise illumination changes with explicit control over light intensity and color. Lastly, we show that our method achieves compelling light editing results and outperforms existing methods based on user preference.
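
To make the "linearity of light" idea concrete, here is a minimal sketch of how paired training images with controlled light changes could be synthesized from two captures of the same static scene (target light on and off). This is an illustrative reconstruction under stated assumptions, not the paper's actual pipeline: it assumes the photographs are in linear (raw) space, and all function and parameter names are hypothetical.

```python
import numpy as np

def decompose_light(img_light_on, img_light_off):
    """Isolate the contribution of a single visible light source.

    Both inputs are linear (raw/HDR) images of the same static scene,
    captured with the target light switched on and off respectively.
    By linearity of light transport, their difference is the scene lit
    by the target light alone.
    """
    light_only = np.clip(img_light_on - img_light_off, 0.0, None)
    ambient_only = img_light_off
    return light_only, ambient_only

def relight(light_only, ambient_only, intensity=1.0,
            color=(1.0, 1.0, 1.0), ambient_scale=1.0):
    """Synthesize a new image with a controlled light change.

    `intensity` scales the target light, `color` tints it per channel,
    and `ambient_scale` scales the ambient illumination independently.
    """
    tint = np.asarray(color, dtype=np.float64).reshape(1, 1, 3)
    return ambient_scale * ambient_only + intensity * tint * light_only
```

Because `intensity` and `ambient_scale` can be varied independently and continuously, a single pair of captures can in principle yield many training pairs spanning a range of target-light and ambient-illumination changes.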

Community

Paper author · Paper submitter

We recently published a research paper titled "LightLab: Controlling Light Sources in Images with Diffusion Models" (accepted to SIGGRAPH 2025).
In the paper we demonstrate how to achieve control over visible (and ambient) light sources from a single image.
The premise of the paper is that physically accurate training data can be generated with classic computational photography, from paired images depicting a change in a visible light source.
We also study how supplementing this small set with synthetic renders affects the trained model's results.
Personally, I found the quality of the results surprising given the simplicity of the method and the relatively low-diversity dataset, which is used in a smart manner.

Project page: https://nadmag.github.io/LightLab/


Hi @NadMag,
I have a question about the synthetic dataset: when rendering the images with Blender, did you render each light-source combination directly in Blender, or did you render each light source separately and then composite the final relit image in a post-processing step?

For example, in a scene with 4 light sources, one could render each light source on by itself in a separate image and then composite those renders in post-processing to get an image with lights 1 and 2 on (and other combinations). Or did you already render the image with light sources 1 and 2 on in Blender, and then just extract the light mask for training?

Thanks a lot
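
For reference, the compositing alternative described in the question also follows from the linearity of light: a render with several lights on equals the sum of the per-light renders. Below is a minimal sketch of that approach, assuming linear HDR renders (e.g. EXR) of a static scene; the function and variable names are illustrative, and this is not necessarily how the paper's synthetic data was produced.

```python
import numpy as np

def composite(per_light_renders, weights, ambient_render=None, ambient_weight=1.0):
    """Combine per-light renders into a scene lit by a weighted mix of sources.

    `per_light_renders` holds one linear render per light source, each with
    only that light enabled; `weights` gives the on/off (or dimming) factor
    for each source. An optional ambient-only render is added on top.
    """
    out = np.zeros_like(per_light_renders[0])
    for render, w in zip(per_light_renders, weights):
        out += w * render
    if ambient_render is not None:
        out += ambient_weight * ambient_render
    return out

# Example: lights 1 and 2 fully on, lights 3 and 4 off.
# img_12 = composite([r1, r2, r3, r4], weights=[1.0, 1.0, 0.0, 0.0])
```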



