Rendering expansive scenes in real time with standard Neural Radiance Field (NeRF) methods demands substantial processing power and video memory. These requirements put real-time rendering of large-scale scenes out of reach for less powerful devices.
What is City-on-Web?
City-on-Web is a method that aims to address these challenges. It divides the scene into manageable blocks and represents each at varying levels of detail (LOD), taking inspiration from traditional graphics techniques for handling large-scale scenes.
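As a rough illustration of block partitioning, the sketch below splits a scene's bounding box into a regular grid of blocks over the ground plane. The function name, the grid layout, and the choice to split only in x and y are assumptions for illustration, not the paper's actual scheme.

```python
import numpy as np

def partition_scene(scene_min, scene_max, blocks_per_axis):
    # Hypothetical sketch: split the scene's ground plane (x, y) into a
    # regular grid of blocks, each spanning the full scene height (z).
    scene_min = np.asarray(scene_min, dtype=float)
    scene_max = np.asarray(scene_max, dtype=float)
    step = (scene_max[:2] - scene_min[:2]) / blocks_per_axis
    blocks = []
    for ix in range(blocks_per_axis):
        for iy in range(blocks_per_axis):
            lo = np.array([scene_min[0] + ix * step[0],
                           scene_min[1] + iy * step[1],
                           scene_min[2]])
            hi = np.array([lo[0] + step[0], lo[1] + step[1], scene_max[2]])
            blocks.append((lo, hi))
    return blocks

# A 100 x 100 m scene split 4 x 4 yields 16 independent blocks.
blocks = partition_scene([0, 0, 0], [100, 100, 30], blocks_per_axis=4)
print(len(blocks))  # 16
```

Each block can then be trained, baked, and streamed independently, which is what makes the divide-and-conquer strategy practical on the web.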
How City-on-Web Works
The researchers use radiance field baking techniques to precompute rendering primitives and store them in 3D atlas textures organized within a sparse grid in each block. This enables real-time rendering without overwhelming the client device’s resources.
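To make the sparse-grid atlas idea concrete, here is a minimal sketch: a coarse indirection grid maps each occupied cell to a small feature brick stored in a flat atlas, while empty cells store -1 and cost nothing. All names, resolutions, and the feature layout are illustrative assumptions, not City-on-Web's actual data structures.

```python
import numpy as np

GRID = 8    # coarse indirection-grid resolution (assumed)
BRICK = 4   # voxels per brick edge (assumed)
FEAT = 8    # feature channels per voxel (assumed)

# Indirection grid: -1 marks empty space, otherwise an atlas slot index.
indirection = -np.ones((GRID, GRID, GRID), dtype=np.int32)
atlas = []  # list of (BRICK, BRICK, BRICK, FEAT) feature bricks

def bake_brick(cell, features):
    # Allocate an atlas slot for an occupied coarse cell.
    indirection[cell] = len(atlas)
    atlas.append(features)

def lookup(point):
    # Query precomputed features at a point in [0, 1)^3.
    p = np.asarray(point, dtype=float) * GRID
    slot = indirection[tuple(p.astype(int))]
    if slot < 0:
        return None  # empty space: nothing was baked here
    local = ((p - np.floor(p)) * BRICK).astype(int)
    return atlas[slot][local[0], local[1], local[2]]

bake_brick((2, 3, 1), np.random.rand(BRICK, BRICK, BRICK, FEAT))
print(lookup((0.3, 0.45, 0.2)).shape)  # (8,) — features from the baked brick
print(lookup((0.9, 0.9, 0.9)))         # None — empty cell
```

Only the occupied bricks need to be stored and transmitted, which is why a sparse layout keeps both payload and VRAM low for scenes that are mostly empty space.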
By using a “divide and conquer” strategy, City-on-Web ensures that each block has ample representational capacity to accurately reconstruct intricate details within the scene. Combined with levels of detail, this enables dynamic resource management. As a result, City-on-Web significantly reduces the bandwidth and memory required to render extensive scenes, leading to smoother user experiences, especially on less powerful devices.
In the researchers’ experiments, City-on-Web renders photorealistic large-scale scenes at 32 frames per second (FPS) at 1080p resolution, while using only 18% of the VRAM and 16% of the payload size of existing mesh-based methods.
Conclusion
The combination of block partitioning and levels-of-detail (LOD) integration notably decreases the payload on the web platform while improving resource-management efficiency. The approach maintains high-fidelity rendering quality by keeping the training process and the rendering phase consistent.
If you want to explore the research in more detail, you can read the paper and learn more about the project.