Creating Hair-raising Graphics: How Researchers Replicate Natural Hair in Gaming

Gaming enthusiasts are always looking for more realistic, well-designed characters in their games. But can game graphics accurately depict natural-looking hair? New research combining CT scanning with machine learning suggests they can.

Traditionally, creating realistic hair in games has relied on time-consuming manual work by artists. This approach is difficult to scale and is constrained by the limitations of 3D authoring tools. Building a diverse dataset of real-world hair variations, such as curly, straight, silky, and wavy styles, is also a major challenge. Researchers at State Key Labs and Meta Reality Labs have proposed a solution.

The Method: Density Volumes and CT Scanners

The researchers developed a method that reconstructs a variety of hairstyles from real-world wigs using density volumes. Unlike image-based approaches, this representation lets them see through the hair and capture its interior structure. The density volumes were acquired with computed tomography (CT), an X-ray technique that offers high resolution over large scan volumes. While CT is commonly used to reconstruct human tissue, recovering complete hair strands is harder because individual strands are so thin. To overcome this, the researchers followed a coarse-to-fine approach.
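The paper's own estimator isn't reproduced here, but the first coarse step, turning a noisy density volume into a per-voxel strand direction, can be illustrated with a standard structure-tensor technique: for a line-like structure such as a hair strand, density varies strongly across the strand and barely along it, so the eigenvector of the local structure tensor with the smallest eigenvalue approximates the strand direction. The function names and parameters below are illustrative, not from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def orientation_field(density, sigma=2.0):
    """Per-voxel unit direction of least density variation; for
    line-like structures such as hair strands this approximates
    the local strand direction."""
    gz, gy, gx = np.gradient(density)
    grads = np.stack([gx, gy, gz], axis=-1)           # (D, H, W, 3)
    # Structure tensor: outer product of gradients, averaged locally.
    T = grads[..., :, None] * grads[..., None, :]     # (D, H, W, 3, 3)
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(T[..., i, j], sigma)
    # The eigenvector of the smallest eigenvalue points along the strand;
    # eigh returns eigenvalues in ascending order.
    _, v = np.linalg.eigh(T)
    return v[..., :, 0]

# Toy volume: one straight "strand" running along the last (x) axis.
vol = np.zeros((16, 16, 48))
vol[8, 8, :] = 1.0
field = orientation_field(gaussian_filter(vol, 1.5))
direction = field[8, 8, 24]   # estimated direction at the strand's midpoint
```

At the strand's midpoint the recovered direction is (up to sign, which a structure tensor cannot resolve) aligned with the x-axis, which is why the paper's pipeline needs a subsequent tracing step that fixes orientation consistency along each strand.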

First, they estimated a 3D orientation field from the noisy density volume of a real hair wig and extracted guide strands based on this field. Then, they populated the scalp with strands using a neural interpolation method and refined them through optimization so that the reconstructed strands accurately conform to the input density volume. Notably, this approach recovered diverse hairstyles without relying on hand-crafted priors.
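The guide-strand extraction step can be sketched as integrating the orientation field from seed points until the density support ends. This is a generic tracing loop, not the paper's algorithm; the sign ambiguity of the orientation field is resolved by keeping each step consistent with the previous one. All names are hypothetical:

```python
import numpy as np

def trace_strand(start, sample_dir, density_ok, step=0.5, max_pts=500):
    """Grow a polyline from `start` by following the local orientation.
    `sample_dir` returns a unit direction at a point; `density_ok`
    reports whether a point still lies inside the hair volume."""
    pts = [np.asarray(start, dtype=float)]
    prev = None
    for _ in range(max_pts):
        d = sample_dir(pts[-1])
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                      # resolve sign: keep marching forward
        nxt = pts[-1] + step * d
        if not density_ok(nxt):
            break                       # left the supported hair volume
        pts.append(nxt)
        prev = d
    return np.array(pts)

# Toy field: every point inside the slab 0 <= x <= 10 points along +x.
strand = trace_strand(
    start=(0.0, 0.0, 0.0),
    sample_dir=lambda p: np.array([1.0, 0.0, 0.0]),
    density_ok=lambda p: p[0] <= 10.0,
    step=0.5,
)
```

The traced polyline marches along +x in 0.5 steps and stops at the slab boundary. In the full pipeline, many such guide strands seed the neural interpolation that densely populates the scalp, and the final optimization nudges the strand geometry to match the measured density volume.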

Comparison with Image-Based Methods

The researchers compared their method with three families of image-based approaches: single-view, sparse-view, and dense-view methods. While single-view and sparse-view methods produced reasonable results for simple hairstyles, they struggled with curly hair because of limited training data. The dense-view method outperformed the other two but failed to infer interior geometry, yielding incomplete reconstructions. In contrast, the researchers' model recovered complete geometry with intricate detail, making the hair look markedly more realistic.

Challenges and Future Work

Extending this approach to capture realistic human heads remains challenging. Industrial CT scanners emit X-rays at levels that exceed safety limits for living subjects, making it infeasible to capture facial geometry this way. Furthermore, even slight motion during the capture process can introduce significant blur into the density volume.

However, the researchers believe that future work focused on machine learning approaches could generate a large corpus of high-quality 3D hair data. This would enable the inference of 3D hair models even from low-resolution density volumes using medical CT scanners.

To learn more about this research, check out the paper and visit the GitHub page and project page.
