NVIDIA researchers have used a neural network to render synthetic 3D environments in real time. Today, every object in a virtual world must be modeled individually, which is expensive and time-consuming. In contrast, the new research renders objects such as buildings, trees and vehicles using models learned automatically from real video.
The technology offers the potential to quickly create virtual worlds for gaming, automotive, architecture, robotics or virtual reality. The network can, for example, generate interactive scenes based on real-world locations or show consumers dancing like their favorite pop stars.
“NVIDIA has been inventing new ways to generate interactive graphics for 25 years, and this is the first time we can do so with a neural network,” said Bryan Catanzaro, vice president of Applied Deep Learning Research at NVIDIA, who led the team developing this work. “Neural networks — specifically generative models — will change how graphics are created. This will enable developers to create new scenes at a fraction of the traditional cost.”
The result of the research is a simple driving game that allows participants to navigate an urban scene. All content is rendered interactively using a neural network that transforms sketches of a 3D world produced by a traditional graphics engine into video. This interactive demo will be shown at the NeurIPS 2018 conference in Montreal.
The generative neural network learned to model the appearance of the world, including lighting, materials and their dynamics. Since the scene is fully synthetically generated, it can be easily edited to remove, modify or add objects.
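The pipeline described above can be sketched in miniature. In this illustrative example, a 2D array of class IDs stands in for the "sketch" a traditional graphics engine would produce, and a fixed color palette stands in for the learned generative network (the real model learns appearance, lighting and materials from video; the class names, palette and `render` function here are hypothetical, not NVIDIA's implementation):

```python
import numpy as np

# Hypothetical class IDs a graphics engine might emit in its semantic "sketch".
ROAD, BUILDING, TREE, VEHICLE = 0, 1, 2, 3

# Stand-in for the learned generative network: one fixed RGB color per class.
# The actual research replaces this lookup with a neural network that
# synthesizes photorealistic appearance.
PALETTE = np.array([
    [90, 90, 90],    # road: gray
    [180, 120, 80],  # building: brown
    [40, 140, 60],   # tree: green
    [200, 30, 30],   # vehicle: red
], dtype=np.uint8)

def render(semantic_map):
    """Map each class ID in the 2D sketch to an RGB pixel."""
    return PALETTE[semantic_map]

# A tiny 4x4 "sketch" of a street scene produced upstream.
sketch = np.array([
    [TREE, BUILDING, BUILDING, TREE],
    [ROAD, ROAD,     ROAD,     ROAD],
    [ROAD, VEHICLE,  VEHICLE,  ROAD],
    [ROAD, ROAD,     ROAD,     ROAD],
])

frame = render(sketch)  # shape (4, 4, 3) RGB image

# Because the scene is driven by the label map, editing is just relabeling:
# remove the vehicle by painting its pixels as road, then re-render.
edited = sketch.copy()
edited[edited == VEHICLE] = ROAD
edited_frame = render(edited)
```

This also illustrates why the rendered scene is easy to edit: removing, modifying or adding an object only requires changing labels in the input map before the renderer runs.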
“The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents,” the researchers wrote in their paper. “Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics.”