Many-Light Rendering Projects

Members: Rui Wang, Yuchi Huo, Hujun Bao

Spherical Gaussian-based Lightcuts for Glossy Interreflections

Yuchi Huo, Shihao Jin, Tao Liu, Wei Hua, Rui Wang, Hujun Bao
State Key Lab of CAD&CG, Zhejiang University

Accepted by Computer Graphics Forum

Abstract:

Rendering directional but non-specular reflections in complex scenes remains challenging. The spherical Gaussian (SG)-based many-light framework provides a scalable solution, but it still requires a large number of glossy virtual lights to avoid spikes and to reduce clamping errors. Directly gathering the contributions of these glossy virtual lights at each pixel in a pairwise fashion is very inefficient. In this paper, we propose an adaptive algorithm with tighter error bounds to efficiently compute glossy interreflections from glossy virtual lights. Our approach extends Lightcuts by building hierarchies on both lights and pixels, with new error bounds and new GPU-based traversal methods between the light and pixel hierarchies. Results demonstrate that our method faithfully and efficiently computes glossy interreflections in scenes with highly glossy and spatially varying reflectance. Compared with the conventional Lightcuts method, our approach generates light cuts with only one fourth to one fifth as many light nodes and therefore exhibits better scalability. Additionally, our GPU implementation achieves an order of magnitude faster performance than the previous method.
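The core Lightcuts idea the abstract builds on, adaptively refining a cut through a light tree until each cluster's error bound falls below a fraction of the total estimate, can be illustrated with a minimal sketch. Everything here is a simplified assumption: the 1/d^2 estimate ignores the BRDF and visibility, and cluster_bound is a crude placeholder rather than the tighter SG-based bounds the paper derives.

```python
import heapq

class LightNode:
    """A node in a binary light tree: a single virtual light (leaf)
    or a cluster with a representative position and total intensity."""
    def __init__(self, intensity, position, left=None, right=None):
        self.intensity = intensity
        self.position = position
        self.left = left
        self.right = right

    def is_leaf(self):
        return self.left is None

def _dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def estimate(node, x):
    # Representative-light estimate of the cluster's contribution at x
    # (unshadowed 1/d^2 falloff; BRDF and visibility are omitted).
    return node.intensity / max(_dist2(node.position, x), 1e-6)

def cluster_bound(node, x):
    # Conservative upper bound on the cluster's contribution. Here it
    # simply equals the estimate; the paper derives much tighter bounds.
    return node.intensity / max(_dist2(node.position, x), 1e-6)

def lightcut(root, x, rel_error=0.02, max_cut=1000):
    """Refine a cut through the light tree until every remaining
    cluster's error bound is below rel_error times the running total."""
    total = estimate(root, x)
    cut = []                                   # finalized clusters
    heap = [(-cluster_bound(root, x), 0, root)]
    tie = 1
    while heap and len(cut) + len(heap) < max_cut:
        neg_b, _, node = heapq.heappop(heap)
        if node.is_leaf() or -neg_b <= rel_error * total:
            cut.append(node)                   # bound met: keep cluster
            continue
        total -= estimate(node, x)             # refine: replace cluster
        for child in (node.left, node.right):  # by its two children
            total += estimate(child, x)
            heapq.heappush(heap, (-cluster_bound(child, x), tie, child))
            tie += 1
    cut.extend(entry[2] for entry in heap)
    return total, len(cut)
```

A loose error threshold keeps the cut at a few coarse clusters; a tight one drives it toward the individual lights, which is why reducing the cut size (as the paper's bounds do) directly improves scalability.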

Download:

Paper

Adaptive Matrix Column Sampling and Completion
for Rendering Participating Media

Yuchi Huo, Rui Wang, Tianlei Hu, Wei Hua, Hujun Bao
State Key Lab of CAD&CG, Zhejiang University

ACM Transactions on Graphics (TOG), 35(6), 11 pages, ACM SIGGRAPH ASIA 2016

Abstract:

Several scalable many-light rendering methods have been proposed recently for the efficient computation of global illumination. However, gathering the contributions of virtual lights in participating media remains an inefficient and time-consuming task. In this paper, we present a novel sparse sampling and reconstruction method to accelerate the gathering step of many-light rendering for participating media. Our technique exploits the observation that the scattered lighting is usually locally coherent and of low rank, even in heterogeneous media. In particular, we first introduce a matrix formulation with light segments as columns and eye-ray segments as rows, casting the gathering step as a matrix sampling and reconstruction problem. We then propose an adaptive matrix column sampling and completion algorithm that efficiently reconstructs the matrix from only a small number of sampled elements. Experimental results show that our approach greatly improves performance, obtaining up to one order of magnitude speedup over other state-of-the-art many-light rendering methods for participating media.
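The low-rank column-sampling idea can be sketched in a few lines: fully sample a handful of columns, take their span as a basis, and recover every other column from just a few sampled entries by least squares. This is a generic illustration under stated assumptions, not the paper's adaptive algorithm: here the matrix L is given as an array, whereas in rendering each sampled entry would be an actual light-transport evaluation, and the column choice is random rather than adaptive.

```python
import numpy as np

def reconstruct_from_columns(L, n_cols, n_rows_per_col, rank, rng):
    """Sketch of column-based matrix completion: fully sample n_cols
    columns, build a rank-r basis for their span, then recover each
    column from n_rows_per_col sampled entries via least squares."""
    m, n = L.shape
    col_idx = rng.choice(n, size=n_cols, replace=False)
    C = L[:, col_idx]                          # fully sampled columns
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    U = U[:, :rank]                            # rank-r column basis
    R = np.empty_like(L)
    for j in range(n):
        rows = rng.choice(m, size=n_rows_per_col, replace=False)
        # fit the column's sampled entries in the basis, then expand
        coeff, *_ = np.linalg.lstsq(U[rows], L[rows, j], rcond=None)
        R[:, j] = U @ coeff
    return R
```

When the underlying matrix really is (near) low rank, as the abstract observes for scattered lighting, the reconstruction is accurate while touching only a small fraction of the entries.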

Download:

Paper
Supplemental Document

A Matrix Sampling-and-Recovery Approach for Many-Lights Rendering

Yuchi Huo, Rui Wang, Shihao Jin, Xinguo Liu, Hujun Bao
State Key Lab of CAD&CG, Zhejiang University

ACM Transactions on Graphics (TOG), 34(6), 12 pages, ACM SIGGRAPH ASIA 2015

Abstract:

Instead of computing with a large number of virtual point lights (VPLs), scalable many-lights rendering methods effectively simulate various illumination effects using only hundreds or thousands of representative VPLs. However, gathering illumination from these representative VPLs, especially computing visibility, is still a tedious and time-consuming task. In this paper, we propose a new matrix sampling-and-recovery scheme that efficiently gathers illumination by sampling only a small number of visibilities between representative VPLs and surface points. Our approach is based on the observation that the lighting matrix used in many-lights rendering is of low rank, so it is possible to sparsely sample a small number of entries and then numerically complete the entire matrix. We propose a three-step algorithm to exploit this observation. First, we design a new VPL clustering algorithm that slices the rows and groups the columns of the full lighting matrix into a number of reduced matrices, which are sampled and recovered individually. Second, we propose a novel prediction method that predicts the visibility of matrix entries from sparsely and randomly sampled entries. Finally, we adapt the matrix separation technique to recover each entire reduced matrix and compute final shading. Experimental results show that our method greatly reduces the visibility sampling required in the final gathering and achieves a 3-7 times speedup over state-of-the-art methods on test scenes.
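The second step, predicting unsampled visibility entries from sparse random samples, rests on the observation that visibility is spatially coherent: nearby surface points usually see the same VPLs. A minimal stand-in for that step is nearest-neighbor prediction within each VPL column; the paper's actual predictor and the subsequent matrix-separation recovery are more involved.

```python
import numpy as np

def predict_visibility(V_sampled, mask, positions):
    """Fill unsampled entries of a visibility matrix (rows: surface
    points, columns: VPLs) by copying the value of the nearest sampled
    surface point in the same column. A simplified stand-in for the
    prediction step; mask marks which entries were actually sampled."""
    m, n = V_sampled.shape
    V = V_sampled.astype(float).copy()
    for j in range(n):
        known = np.flatnonzero(mask[:, j])
        if known.size == 0:
            continue                 # nothing sampled in this column
        for i in np.flatnonzero(~mask[:, j]):
            d = np.linalg.norm(positions[known] - positions[i], axis=1)
            V[i, j] = V[known[np.argmin(d)], j]
    return V
```

Because most entries are predicted rather than ray-traced, the expensive visibility sampling is confined to the sparse set of sampled entries, which is where the reported speedup comes from.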

Download:

Paper
Supplemental Document

GPU-based Out-of-Core Many-Lights Rendering

Rui Wang, Yuchi Huo, Yazhen Yuan, Kun Zhou, Wei Hua, Hujun Bao
State Key Lab of CAD&CG, Zhejiang University

ACM Transactions on Graphics (TOG), 32(6), Article 210, 10 pages, ACM SIGGRAPH ASIA 2013

Abstract:

In this paper, we present a GPU-based out-of-core rendering approach built on the many-lights rendering framework. Many-lights rendering is an efficient and scalable framework for handling large numbers of lights, but when the data sizes of both the lights and the geometry exceed in-core memory, managing these two out-of-core datasets becomes critical. In our approach, we formulate this data management as a graph traversal optimization problem: we first build the out-of-core lights and geometry data into a graph, and then guide the shading computations by finding a shortest path that visits all vertices in the graph. Based on this data management, we develop a GPU-based out-of-GPU-core rendering algorithm that moves data between CPU host memory and GPU device memory. The algorithm takes two main steps: out-of-core data preparation, which packs data into layouts optimized for many-lights rendering, and out-of-core shading using the graph-based data management. We demonstrate our algorithm on scenes with out-of-core detailed geometry and out-of-core lights. Results show that our approach generates complex global illumination effects with increased data access coherence and achieves one order of magnitude performance gain over a CPU-based approach.
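The intuition behind the graph formulation is that each shading task touches one geometry chunk and one light chunk, and visiting tasks in an order that minimizes chunk swaps amortizes out-of-core transfers. A toy sketch of that idea, using a greedy nearest-neighbor walk as a cheap heuristic in place of the paper's shortest-path formulation (the task tuples and swap_cost function are illustrative assumptions):

```python
def greedy_visit_order(tasks, swap_cost):
    """Order shading tasks so that consecutive tasks reuse resident
    out-of-core chunks: a greedy nearest-neighbor walk over the task
    graph. swap_cost(a, b) counts the chunks that must be swapped in
    when moving from task a to task b."""
    order = [tasks[0]]
    remaining = list(tasks[1:])
    while remaining:
        nxt = min(remaining, key=lambda t: swap_cost(order[-1], t))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# Each task is a hypothetical (geometry_chunk, light_chunk) pair; the
# swap cost is the number of chunks that differ between two tasks.
def chunk_swap_cost(a, b):
    return int(a[0] != b[0]) + int(a[1] != b[1])
```

With tasks ordered this way, consecutive shading batches tend to share at least one resident chunk, which is the data-access coherence the results section refers to.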

Download:

Paper
Supplemental Document