GarVerseLOD: High-Fidelity 3D Garment Reconstruction from a Single In-the-Wild Image using a Dataset with Levels of Details
ACM Transactions on Graphics (IF 7.8), Pub Date: 2024-11-19, DOI: 10.1145/3687921
Zhongjin Luo, Haolin Liu, Chenghong Li, Wanghao Du, Zirong Jin, Wanhu Sun, Yinyu Nie, Weikai Chen, Xiaoguang Han

Neural implicit functions have brought impressive advances to the state of the art in clothed human digitization from multiple or even single images. Despite this progress, however, current methods still struggle to generalize to unseen images with complex cloth deformations and body poses. In this work, we present GarVerseLOD, a new dataset and framework that paves the way to unprecedented robustness in high-fidelity 3D garment reconstruction from a single unconstrained image. Inspired by the recent success of large generative models, we believe that one key to addressing the generalization challenge lies in the quantity and quality of 3D garment data. To this end, GarVerseLOD collects 6,000 high-quality cloth models with fine-grained geometric details, manually created by professional artists. Beyond the scale of the training data, we observe that disentangled granularities of geometry play an important role in boosting the generalization capability and inference accuracy of the learned model. We therefore craft GarVerseLOD as a hierarchical dataset with levels of details (LOD), spanning from detail-free stylized shapes to pose-blended garments with pixel-aligned details. This makes the highly under-constrained problem tractable by factorizing the inference into easier tasks, each narrowed down to a smaller search space. To ensure that GarVerseLOD generalizes well to in-the-wild images, we propose a novel labeling paradigm based on conditional diffusion models that generates extensive, highly photorealistic paired images for each garment model. We evaluate our method on a large number of in-the-wild images. Experimental results demonstrate that GarVerseLOD generates standalone garment pieces of significantly better quality than prior approaches while remaining robust to large variations in pose, illumination, occlusion, and deformation. Code and dataset are available at garverselod.github.io.
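The coarse-to-fine factorization described above — detail-free stylized shape, then pose blending, then pixel-aligned details — can be illustrated as a cascaded pipeline in which each stage conditions on the previous stage's output. The sketch below is purely illustrative; all class and function names are our assumptions, not identifiers from the released code.

```python
# Hypothetical sketch of the LOD-based coarse-to-fine inference cascade.
# Each stage narrows the search space for the next; none of these names
# come from the GarVerseLOD codebase.
from dataclasses import dataclass


@dataclass
class GarmentEstimate:
    level: str   # which LOD this estimate represents
    params: dict # accumulated garment parameters


def infer_stylized_shape(image):
    # Stage 1: detail-free stylized garment shape (smallest search space).
    return GarmentEstimate("stylized", {"category": "dress", "scale": 1.0})


def blend_pose(image, coarse):
    # Stage 2: deform the coarse shape toward the body pose in the image,
    # keeping the stage-1 parameters fixed.
    return GarmentEstimate("pose-blended", {**coarse.params, "pose": "A-pose"})


def add_pixel_aligned_details(image, posed):
    # Stage 3: recover fine, pixel-aligned wrinkles and folds on top of
    # the pose-blended garment.
    return GarmentEstimate("detailed", {**posed.params, "details": True})


def reconstruct(image):
    coarse = infer_stylized_shape(image)
    posed = blend_pose(image, coarse)
    return add_pixel_aligned_details(image, posed)


result = reconstruct("photo.jpg")
print(result.level)  # prints "detailed"
```

The design point is that the highly under-constrained single-image problem is split into stages whose individual search spaces are much smaller than the joint space.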

Updated: 2024-11-19