In the context of text-to-3D, the key challenge lies in lifting 2D diffusion models to 3D generation. Existing methods struggle to create geometry because 2D diffusion models lack a geometric prior and because materials and lighting are intricately entangled in natural images. To tackle this, a team of researchers from Alibaba has proposed RichDreamer, a method built around a Normal-Depth diffusion model designed to provide a robust geometric foundation for high-fidelity text-to-3D generation.
Challenges in Text-to-3D Conversion
Existing methods have shown promise by first creating geometry through score distillation sampling (SDS) applied to rendered surface normals, followed by appearance modeling. However, relying on a 2D RGB diffusion model to optimize surface normals is suboptimal: the distribution discrepancy between natural images and normal maps makes the optimization unstable. The researchers therefore propose learning a generalizable Normal-Depth diffusion model for 3D generation.
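For intuition, here is a minimal sketch of a single SDS step in PyTorch. The `add_noise` and `predict_noise` methods are hypothetical stand-ins for a frozen diffusion model's scheduler and denoiser, and the usual timestep weighting is omitted for brevity.

```python
import torch

def sds_surrogate_loss(diffusion, image, prompt_embed, t_range=(0.02, 0.98)):
    """One score-distillation step on a differentiably rendered image
    (an RGB render or, in the geometry stage, a normal map).
    `diffusion` is a frozen 2D diffusion model; add_noise and
    predict_noise are assumed interfaces, not a specific library API."""
    t = torch.empty(1, device=image.device).uniform_(*t_range)  # random timestep
    noise = torch.randn_like(image)
    noisy = diffusion.add_noise(image, noise, t)                # forward-noised render
    with torch.no_grad():
        eps_pred = diffusion.predict_noise(noisy, t, prompt_embed)
    # The SDS gradient is (eps_pred - noise); the surrogate loss below
    # has exactly that gradient with respect to the rendered image.
    return ((eps_pred - noise).detach() * image).sum()
```

Backpropagating this loss through the renderer nudges the 3D parameters so that renders look plausible to the frozen prior. When that prior has only ever seen natural RGB images, applying it to normal maps is exactly the distribution mismatch described above.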
Addressing the Challenges with RichDreamer
Lifting from 2D to 3D raises two intertwined challenges: enforcing multi-view consistency and disentangling surface geometry, texture, and lighting, which are inherently coupled in natural images. The proposed Normal-Depth diffusion model tackles the geometric side by learning a joint distribution of normal and depth information, which together effectively describe scene geometry. Pre-trained on the extensive LAION dataset, the model captures the diverse distributions of normal and depth found in real-world scenes and shows remarkable generalization; the team then fine-tunes it on a synthetic dataset.
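Concretely, a model of this kind denoises normals and depth jointly. Below is a minimal, assumed packing of the two maps into a single 4-channel diffusion target; the normalization choices (per-image depth scaling to [-1, 1]) are illustrative assumptions, not the paper's exact preprocessing.

```python
import torch

def make_nd_target(normal: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
    """Pack a normal map (B, 3, H, W, values in [-1, 1]) and a raw depth
    map (B, 1, H, W) into the 4-channel tensor a Normal-Depth diffusion
    model would be trained to denoise. Scaling here is an assumption."""
    d_min = depth.amin(dim=(2, 3), keepdim=True)
    d_max = depth.amax(dim=(2, 3), keepdim=True)
    depth = (depth - d_min) / (d_max - d_min + 1e-8)  # per-image unit range
    depth = depth * 2.0 - 1.0                         # match the normals' range
    return torch.cat([normal, depth], dim=1)          # (B, 4, H, W)
```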
Improved Material Generation
To address the mixed illumination effects baked into generated materials, an albedo diffusion model is introduced to impose data-driven constraints on the albedo component. This sharpens the separation of reflectance from illumination, contributing to more accurate and detailed materials.
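One plausible way to apply such a constraint is to run score distillation on the rendered albedo map against the frozen albedo model, mirroring the SDS step sketched earlier. The method names below are assumptions, not the paper's code.

```python
import torch

def albedo_sds_loss(albedo_model, albedo, prompt_embed):
    """Score distillation on a rendered albedo map (B, 3, H, W) using a
    frozen albedo diffusion model, penalizing shading and shadows baked
    into the reflectance. add_noise / predict_noise are hypothetical."""
    t = torch.empty(1, device=albedo.device).uniform_(0.02, 0.98)
    noise = torch.randn_like(albedo)
    noisy = albedo_model.add_noise(albedo, noise, t)
    with torch.no_grad():
        eps_pred = albedo_model.predict_noise(noisy, t, prompt_embed)
    return ((eps_pred - noise).detach() * albedo).sum()
```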
The RichDreamer Model
The geometry generation process applies SDS with the proposed Normal-Depth diffusion model integrated into the Fantasia3D pipeline. The team also explores using the model to optimize Neural Radiance Fields (NeRF) and demonstrates its effectiveness in enhancing geometric reconstruction, as sketched below.
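Put together, the geometry stage can be read as the following loop, written under assumptions: `geometry` is a differentiable representation (a NeRF, or a DMTet-style mesh as in Fantasia3D) with a hypothetical `render_nd` method, and `cameras.sample()` draws a random viewpoint.

```python
import torch

def optimize_geometry(nd_diffusion, geometry, cameras, prompt_embed,
                      steps=5000, lr=1e-3):
    """Sketch of Normal-Depth SDS over a differentiable geometry.
    render_nd, sample, add_noise, and predict_noise are assumed
    interfaces standing in for the actual pipeline components."""
    opt = torch.optim.Adam(geometry.parameters(), lr=lr)
    for _ in range(steps):
        cam = cameras.sample()                       # random viewpoint
        normal, depth = geometry.render_nd(cam)      # (B,3,H,W), (B,1,H,W)
        x = torch.cat([normal, depth], dim=1)        # 4-channel ND render
        t = torch.empty(1, device=x.device).uniform_(0.02, 0.98)
        noise = torch.randn_like(x)
        with torch.no_grad():
            eps_pred = nd_diffusion.predict_noise(
                nd_diffusion.add_noise(x, noise, t), t, prompt_embed)
        loss = ((eps_pred - noise).detach() * x).sum()  # SDS surrogate loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```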
Analyzing Results and Conclusion
Appearance modeling adopts a Physically-Based Rendering (PBR) Disney material model, with the albedo diffusion model described above providing improved material generation. Evaluations show that the proposed method outperforms state-of-the-art approaches in both geometry and textured model generation.
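To give a sense of what the optimized material parameters mean, here is a minimal metallic-roughness shading function in the spirit of the Disney PBR model: a textbook Cook-Torrance simplification, not the paper's actual renderer.

```python
import torch
import torch.nn.functional as F

def shade_pbr(albedo, roughness, metallic, n, l, v):
    """Simplified metallic-roughness shading. albedo is (..., 3);
    roughness and metallic are (..., 1); n, l, v are unit surface
    normal, light, and view directions, each (..., 3)."""
    h = F.normalize(l + v, dim=-1)                    # half vector
    nl = (n * l).sum(-1, keepdim=True).clamp(min=1e-4)
    nv = (n * v).sum(-1, keepdim=True).clamp(min=1e-4)
    nh = (n * h).sum(-1, keepdim=True).clamp(min=1e-4)
    vh = (v * h).sum(-1, keepdim=True).clamp(min=1e-4)

    a2 = (roughness ** 2) ** 2
    d = a2 / (torch.pi * ((nh ** 2) * (a2 - 1) + 1) ** 2)      # GGX distribution
    k = (roughness + 1) ** 2 / 8
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))  # Smith geometry
    f0 = 0.04 * (1 - metallic) + albedo * metallic             # base reflectance
    f = f0 + (1 - f0) * (1 - vh) ** 5                          # Schlick Fresnel

    specular = d * g * f / (4 * nl * nv)
    diffuse = (1 - metallic) * albedo / torch.pi
    return (diffuse + specular) * nl                 # outgoing radiance per unit light
```

During appearance optimization, it is these albedo, roughness, and metallic maps that are fit, which is why a prior that keeps lighting out of the albedo matters.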
In conclusion, the research team presents a pioneering approach to 3D generation through the introduction of a Normal-Depth diffusion model, addressing critical challenges in text-to-3D modeling. The method delivers significant improvements in geometry and appearance modeling, setting a new standard in the field. Future directions include extending the approach to text-to-scene generation and exploring additional aspects of appearance modeling.