Learning Efficient Illumination Multiplexing for
Joint Capture of Reflectance and Shape

Kaizhang Kang, Cihui Xie, Chengan He, Mingqi Yi,
Minyi Gu, Zimin Chen, Kun Zhou and Hongzhi Wu

ACM Trans. Graph. (Proc. SIGGRAPH Asia 2019), 38, 6 (Nov. 2019), 165.
Patent Pending.


We propose a novel framework that automatically learns the lighting patterns for efficient, joint acquisition of unknown reflectance and shape. The core of our framework is a deep neural network, with a shared linear encoder that directly corresponds to the lighting patterns used in physical acquisition, and non-linear decoders that output per-pixel normal and diffuse / specular information from photographs. We exploit the diffuse and normal information from multiple views to reconstruct a detailed 3D shape, and then fit BRDF parameters to the diffuse / specular information, producing texture maps as reflectance results. We demonstrate the effectiveness of the framework on physical objects that vary considerably in reflectance and shape, acquired with as few as 16~32 lighting patterns, corresponding to 7~15 seconds of per-view acquisition time. Our framework is useful for optimizing acquisition efficiency in both novel and existing setups, as it automatically adapts to various factors, including the geometry and lighting layout of the device and the properties of appearance.


Paper [.PDF, Low-res, 6.7MB] [ACM Digital Library]

Bibtex [.BIB]

Video [.MP4, 99.8MB] [YouTube]

Slides [.PDF, 2.1MB]

Code & Data

Our source code is released under the GPLv3 license for academic purposes. The only requirement for using the code in your research is to cite our paper [.BIB]. For commercial licensing options, please email hwu at acm.org. For technical issues, please email cocoa_kang at zju.edu.cn.

The link to our source repository is https://github.com/cocoakang/thinking-lightstage.

It contains the scripts for training the neural network proposed in our paper, as well as a module for synthetic training-data generation, built on the TensorFlow framework. After downloading the repository, please run siga19_source\train.bat (Windows) or train.sh (Linux) to train the network with synthetic lumitexels generated on-the-fly. For more details, please refer to the source code and its README documents. The current version does not include any test cases yet; please check the online repository later, as we are still updating it.
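To convey the core idea behind the learned lighting patterns, here is a minimal NumPy sketch (not the authors' code; all names and sizes are illustrative assumptions). The shared linear encoder is simply a matrix of lighting patterns: each row assigns an intensity to every light source on the device for one photograph, and a camera measurement is the dot product of a pattern with the surface point's lumitexel (its response to each individual light).

```python
import numpy as np

NUM_LIGHTS = 1024    # light sources on the acquisition device (assumed)
NUM_PATTERNS = 16    # as few as 16~32 patterns, per the paper

rng = np.random.default_rng(0)

# Physical light intensities must be nonnegative, so the learned encoder
# weights are constrained accordingly (here, simple clipping for illustration).
patterns = np.clip(rng.normal(size=(NUM_PATTERNS, NUM_LIGHTS)), 0.0, None)

# A lumitexel: one response value per individual light source.
lumitexel = rng.random(NUM_LIGHTS)

# One measurement per captured photograph; the non-linear decoders would
# then map these few values to per-pixel normal and diffuse/specular data.
measurements = patterns @ lumitexel

assert measurements.shape == (NUM_PATTERNS,)
assert np.all(measurements >= 0.0)
```

In training, this matrix is a network layer optimized jointly with the decoders; at capture time, its rows are displayed as actual lighting patterns on the device.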