Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks


In Special Interest Group on Computer Graphics and Interactive Techniques Conference Proceedings (SIGGRAPH '22 Conference Proceedings), August 7–11, 2022, Vancouver, BC, Canada. ACM, New York, NY, USA, 10 pages.

Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha


Given a body motion sequence and simulation parameters, our approach efficiently predicts the deformations of loose-fitting garments. We transfer the body motion sequence to the motion of the garment's virtual bones (the red balls on the lower part of the garment in the middle figure). These bone-driven motion networks predict the large-scale deformations caused by complex motions of dresses, such as flying, swirling, and dropping (left). Our method also generalizes to dresses under unseen simulation parameters at interactive rates (right).
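
To give a concrete picture of how extracted virtual bones drive the garment surface, the following is a minimal linear-blend-skinning sketch in NumPy. The function name, array shapes, and the assumption of rigid 4x4 bone transforms are illustrative only and do not reproduce the paper's exact formulation.

    import numpy as np

    def skin_garment(rest_verts, bone_transforms, skin_weights):
        # Pose garment vertices by linear blend skinning over virtual bones.
        # rest_verts:      (V, 3) garment vertices in the rest pose
        # bone_transforms: (B, 4, 4) per-bone rigid transforms for the current
        #                  frame (e.g., predicted by a motion network)
        # skin_weights:    (V, B) per-vertex blend weights from skin decomposition
        V = rest_verts.shape[0]
        rest_h = np.concatenate([rest_verts, np.ones((V, 1))], axis=1)   # (V, 4)
        per_bone = np.einsum('bij,vj->bvi', bone_transforms, rest_h)     # (B, V, 4)
        posed_h = np.einsum('vb,bvi->vi', skin_weights, per_bone)        # (V, 4)
        return posed_h[:, :3]

In a pipeline like the one described above, this bone-driven shape would carry the low-frequency deformation, with high-frequency detail estimated on top of it.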

Abstract

We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates. Given a garment, we generate a simulation database and extract virtual bones from the simulated mesh sequences using skin decomposition. At runtime, we compute the low- and high-frequency deformations separately, in a sequential manner. The low-frequency deformations are predicted by transferring body motions to the virtual bones' motions, and the high-frequency deformations are estimated by leveraging the global information of the virtual bones' motions. In addition, our method can estimate garment deformations caused by variations in the simulation parameters (e.g., the fabric's bending stiffness) using an RBF kernel ensembling of networks trained for different sets of simulation parameters. Through extensive comparisons, we show that our method outperforms state-of-the-art methods in prediction accuracy of mesh deformations by about 20% in RMSE and 10% in Hausdorff distance and STED.
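
To make the parameter-generalization step more concrete, below is a minimal sketch of RBF kernel ensembling, assuming a Gaussian kernel over simulation-parameter vectors and one trained network per parameter set. The function name, array shapes, and the gamma bandwidth are hypothetical and not taken from the paper.

    import numpy as np

    def rbf_ensemble(query_params, trained_params, trained_preds, gamma=1.0):
        # Blend per-network predictions with a Gaussian RBF kernel.
        # query_params:   (P,)      simulation parameters of the query
        #                           (e.g., the fabric's bending stiffness)
        # trained_params: (K, P)    parameter sets the K networks were trained on
        # trained_preds:  (K, V, 3) garment deformation predicted by each network
        # gamma:          kernel bandwidth (hypothetical; tuned in practice)
        d2 = np.sum((trained_params - query_params) ** 2, axis=1)  # (K,)
        w = np.exp(-gamma * d2)
        w /= w.sum()                                               # weights sum to 1
        return np.einsum('k,kvi->vi', w, trained_preds)            # (V, 3)

Under this kind of scheme, networks trained on nearby parameter sets dominate the blend, which is what allows interpolation to unseen simulation parameters at runtime.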

Download

PDF (61.8 MB)
Supplementary materials
Video (414.0 MB)
Datasets and source code

Acknowledgments: The authors would like to thank Sammi (Xia Lin), Eric (Ma Bingbing), Eason (Yang Xiajun), and Rambokou (Kou Qilong) from the Tencent Institute of Games for providing data and helping with the application demonstrations during the preparation of this paper.